Uber and Lyft drivers’ median hourly wage is just $3.37, report finds — from theguardian.com by Sam Levin
Majority of drivers make less than minimum wage and many end up losing money, according to study published by MIT

Excerpt (emphasis DSC):

Uber and Lyft drivers in the US make a median profit of $3.37 per hour before taxes, according to a new report that suggests a majority of ride-share workers make below minimum wage and that many actually lose money.

Researchers did an analysis of vehicle cost data and a survey of more than 1,100 drivers for the ride-hailing companies for the paper published by the Massachusetts Institute of Technology’s Center for Energy and Environmental Policy Research. The report – which factored in insurance, maintenance, repairs, fuel and other costs – found that 30% of drivers are losing money on the job and that 74% earn less than the minimum wage in their states.

The findings have raised fresh concerns about labor standards in the booming sharing economy as companies such as Uber and Lyft continue to face scrutiny over their treatment of drivers, who are classified as independent contractors and have few rights or protections.

“This business model is not currently sustainable,” said Stephen Zoepf, executive director of the Center for Automotive Research at Stanford University and co-author of the paper. “The companies are losing money. The businesses are being subsidized by [venture capital] money … And the drivers are essentially subsidizing it by working for very low wages.”

 


 

From DSC:
I don’t know enough about this yet to offer much feedback or insight. And while it’s still too early for me to tell (and I’m not myself a driver for Uber or Lyft), this article prompts me to put this type of thing on my radar.

That is, will the business models that arise from such a sharing economy benefit only a handful of owners and upper-level managers, or will they benefit the majority of their workers? I’m very skeptical in these early stages, as these types of companies aren’t likely offering medical or dental benefits, retirement contributions, etc. to their workers. It likely depends upon the particular business model(s) and/or organization(s) being considered, but I think this area is worth many of us watching.

 


 

Also see:

The Economics of Ride-Hailing: Driver Revenue, Expenses and Taxes — from ceepr.mit.edu / MIT Center for Energy and Environmental Policy Research by Stephen Zoepf, Stella Chen, Paa Adu, and Gonzalo Pozo

February 2018

We perform a detailed analysis of Uber and Lyft ride-hailing driver economics by pairing results from a survey of over 1100 drivers with detailed vehicle cost information. Results show that per hour worked, median profit from driving is $3.37/hour before taxes, and 74% of drivers earn less than the minimum wage in their state. 30% of drivers are actually losing money once vehicle expenses are included. On a per-mile basis, median gross driver revenue is $0.59/mile but vehicle operating expenses reduce real driver profit to a median of $0.29/mile. For tax purposes the $0.54/mile standard mileage deduction in 2016 means that nearly half of drivers can declare a loss on their taxes. If drivers are fully able to capitalize on these losses for tax purposes, 73.5% of an estimated U.S. market $4.8B in annual ride-hailing driver profit is untaxed.
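The per-mile figures in the abstract can be checked with a quick back-of-the-envelope calculation. A minimal sketch, using only the numbers quoted above (the variable names are mine, not the paper's):

```python
# Back-of-the-envelope check of the paper's per-mile figures.
GROSS_REVENUE_PER_MILE = 0.59       # median gross driver revenue ($/mile)
PROFIT_PER_MILE = 0.29              # median profit after vehicle expenses ($/mile)
STANDARD_DEDUCTION_PER_MILE = 0.54  # 2016 IRS standard mileage deduction ($/mile)

# Implied median vehicle operating cost per mile
operating_cost = GROSS_REVENUE_PER_MILE - PROFIT_PER_MILE  # ~$0.30/mile

# Taxable profit per mile if a driver claims the standard mileage
# deduction instead of deducting actual expenses
taxable_per_mile = GROSS_REVENUE_PER_MILE - STANDARD_DEDUCTION_PER_MILE  # ~$0.05/mile

# Any driver grossing less than $0.54/mile can declare a loss on paper,
# which is why the paper says nearly half of drivers are able to do so.
print(f"implied operating cost: ${operating_cost:.2f}/mile")
print(f"taxable profit:         ${taxable_per_mile:.2f}/mile")
```

In other words, the standard mileage deduction ($0.54) sits well above the median real cost of operating the vehicle (about $0.30), which is what allows most of the profit to go untaxed.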

Keywords: Transportation, Gig Economy, Cost-Benefit Analysis, Tax policy, Labor Center
Full Paper | Research Brief

 

——-

Addendum on 3/7/18:

The ride-hailing wage war continues

How much do Lyft and Uber drivers really make? After reporting in a study that their median take-home pay was just $3.37/hour—and then getting called out by Uber’s CEO—researchers have significantly revised their findings.

Closer to a living wage: Lead author Stephen Zoepf of Stanford University released a statement on Twitter saying that using two different methods to recalculate the hourly wage, they find a salary of either $8.55 or $10 per hour, after expenses. Zoepf’s team will be doing a larger revision of the paper over the next few weeks.

Still low-balling it?: Uber and Lyft are adamant that even the new numbers underestimate what drivers are actually paid. “While the revised results are not as inaccurate as the original findings, driver earnings are still understated,” says Lyft’s director of communications Adrian Durbin.

The truth is out there: Depending on who’s doing the math, estimates range from $8.55 (Zoepf, et al.) up to over $21 an hour (Uber). In other words, we’re nowhere near a consensus on how much drivers in the gig-economy make.

 ——-

 

Tech companies should stop pretending AI won’t destroy jobs — from technologyreview.com / MIT Technology Review by Kai-Fu Lee
No matter what anyone tells you, we’re not ready for the massive societal upheavals on the way.

Excerpt (emphasis DSC):

The rise of China as an AI superpower isn’t a big deal just for China. The competition between the US and China has sparked intense advances in AI that will be impossible to stop anywhere. The change will be massive, and not all of it good. Inequality will widen. As my Uber driver in Cambridge has already intuited, AI will displace a large number of jobs, which will cause social discontent. Consider the progress of Google DeepMind’s AlphaGo software, which beat the best human players of the board game Go in early 2016. It was subsequently bested by AlphaGo Zero, introduced in 2017, which learned by playing games against itself and within 40 days was superior to all the earlier versions. Now imagine those improvements transferring to areas like customer service, telemarketing, assembly lines, reception desks, truck driving, and other routine blue-collar and white-collar work. It will soon be obvious that half of our job tasks can be done better at almost no cost by AI and robots. This will be the fastest transition humankind has experienced, and we’re not ready for it.

And finally, there are those who deny that AI has any downside at all—which is the position taken by many of the largest AI companies. It’s unfortunate that AI experts aren’t trying to solve the problem. What’s worse, and unbelievably selfish, is that they actually refuse to acknowledge the problem exists in the first place.

These changes are coming, and we need to tell the truth and the whole truth. We need to find the jobs that AI can’t do and train people to do them. We need to reinvent education. These will be the best of times and the worst of times. If we act rationally and quickly, we can bask in what’s best rather than wallow in what’s worst.

 

From DSC:
If a business has a choice between hiring a human being or having the job done by a piece of software and/or by a robot, which do you think they’ll go with? My guess? It’s all about the money — whichever/whoever is less expensive will get the job.

However, that way of thinking may cause enormous social unrest if the software and robots leave human beings in the (job search) dust. Do we, as a society, win with this way of thinking? To me, it’s capitalism gone astray. We aren’t caring enough for our fellow members of the human race, people who have to put bread and butter on their tables. People who have to support their families. People who want to make solid contributions to society and/or to pursue their vocation/callings — to have/find purpose in their lives.

 

Others think we’ll be saved by a universal basic income. “Take the extra money made by AI and distribute it to the people who lost their jobs,” they say. “This additional income will help people find their new path, and replace other types of social welfare.” But UBI doesn’t address people’s loss of dignity or meet their need to feel useful. It’s just a convenient way for a beneficiary of the AI revolution to sit back and do nothing.

 

 

To Fight Fatal Infections, Hospitals May Turn to Algorithms — from scientificamerican.com by John McQuaid
Machine learning could speed up diagnoses and improve accuracy

Excerpt:

The CDI algorithm—based on a form of artificial intelligence called machine learning—is at the leading edge of a technological wave starting to hit the U.S. health care industry. After years of experimentation, machine learning’s predictive powers are well-established, and it is poised to move from labs to broad real-world applications, said Zeeshan Syed, who directs Stanford University’s Clinical Inference and Algorithms Program.

“The implications of machine learning are profound,” Syed said. “Yet it also promises to be an unpredictable, disruptive force—likely to alter the way medical decisions are made and put some people out of work.”

 

 

Lawyer-Bots Are Shaking Up Jobs — from technologyreview.com by Erin Winick

Excerpt:

Meticulous research, deep study of case law, and intricate argument-building—lawyers have used similar methods to ply their trade for hundreds of years. But they’d better watch out, because artificial intelligence is moving in on the field.

As of 2016, there were over 1,300,000 licensed lawyers and 200,000 paralegals in the U.S. Consultancy group McKinsey estimates that 22 percent of a lawyer’s job and 35 percent of a law clerk’s job can be automated, which means that while humanity won’t be completely overtaken, major businesses and career adjustments aren’t far off (see “Is Technology About to Decimate White-Collar Work?”). In some cases, they’re already here.

 

“If I was the parent of a law student, I would be concerned a bit,” says Todd Solomon, a partner at the law firm McDermott Will & Emery, based in Chicago. “There are fewer opportunities for young lawyers to get trained, and that’s the case outside of AI already. But if you add AI onto that, there are ways that is advancement, and there are ways it is hurting us as well.”

 

So far, AI-powered document discovery tools have had the biggest impact on the field. By training on millions of existing documents, case files, and legal briefs, a machine-learning algorithm can learn to flag the appropriate sources a lawyer needs to craft a case, often more successfully than humans. For example, JPMorgan announced earlier this year that it is using software called Contract Intelligence, or COIN, which can in seconds perform document review tasks that took legal aides 360,000 hours.

People fresh out of law school won’t be spared the impact of automation either. Document-based grunt work is typically a key training ground for first-year associate lawyers, and AI-based products are already stepping in. CaseMine, a legal technology company based in India, builds on document discovery software with what it calls its “virtual associate,” CaseIQ. The system takes an uploaded brief and suggests changes to make it more authoritative, while providing additional documents that can strengthen a lawyer’s arguments.

 

 

Lessons From Artificial Intelligence Pioneers — from gartner.com by Christy Pettey

CIOs are struggling to accelerate deployment of artificial intelligence (AI). A recent Gartner survey of global CIOs found that only 4% of respondents had deployed AI. However, the survey also found that one-fifth of the CIOs are already piloting or planning to pilot AI in the short term.

Such ambition puts these leaders in a challenging position. AI efforts are already stressing staff, skills, and the readiness of in-house and third-party AI products and services. Without effective strategic plans for AI, organizations risk wasting money, falling short in performance and falling behind their business rivals.

Pursue small-scale plans likely to deliver small-scale payoffs that will offer lessons for larger implementations

“AI is just starting to become useful to organizations but many will find that AI faces the usual obstacles to progress of any unproven and unfamiliar technology,” says Whit Andrews, vice president and distinguished analyst at Gartner. “However, early AI projects offer valuable lessons and perspectives for enterprise architecture and technology innovation leaders embarking on pilots and more formal AI efforts.”

So what lessons can we learn from these early AI pioneers?

 

 

Why Artificial Intelligence Researchers Should Be More Paranoid — from wired.com by Tom Simonite

Excerpt:

What to do about that? The report’s main recommendation is that people and companies developing AI technology discuss safety and security more actively and openly—including with policymakers. It also asks AI researchers to adopt a more paranoid mindset and consider how enemies or attackers might repurpose their technologies before releasing them.

 

 

How to Prepare College Graduates for an AI World — from wsj.com by
Northeastern University President Joseph Aoun says schools need to change their focus, quickly

Excerpt:

WSJ: What about adults who are already in the workforce?

DR. AOUN: Society has to provide ways, and higher education has to provide ways, for people to re-educate themselves, reskill themselves or upskill themselves.

That is the part that I see that higher education has not embraced. That’s where there is an enormous opportunity. We look at lifelong learning in higher education as an ancillary operation, as a second-class operation in many cases. We dabble with it, we try to make money out of it, but we don’t embrace it as part of our core mission.

 

 

Inside Amazon’s Artificial Intelligence Flywheel — from wired.com by Steven Levy
How deep learning came to power Alexa, Amazon Web Services, and nearly every other division of the company.

Excerpt:

Amazon loves to use the word flywheel to describe how various parts of its massive business work as a single perpetual motion machine. It now has a powerful AI flywheel, where machine-learning innovations in one part of the company fuel the efforts of other teams, who in turn can build products or offer services to affect other groups, or even the company at large. Offering its machine-learning platforms to outsiders as a paid service makes the effort itself profitable—and in certain cases scoops up yet more data to level up the technology even more.

 

 

 

 

Career Pathways: Five Ways to Connect College and Careers calls for states to help students, their families, and employers unpack the meaning of postsecondary credentials and assess their value in the labor market.

Excerpt:

If students are investing more to go to college, they need to have answers to basic questions about the value of postsecondary education. They need better information to make decisions that have lifelong economic consequences.

Getting a college education is one of the biggest investments people will make in their lives, but the growing complexity of today’s economy makes it difficult for higher education to deliver efficiency and consistent quality. Today’s economy is more intricate than those of decades past.

 

From this press release:

It’s Time to Fix Higher Education’s Tower of Babel, Says Georgetown University Report
The lack of transparency around college and careers leads to costly, uninformed decisions

(Washington, D.C., July 11, 2017) — A new report from the Georgetown University Center on Education and the Workforce (Georgetown Center), Career Pathways: Five Ways to Connect College and Careers, calls for states to help students, their families, and employers unpack the meaning of postsecondary credentials and assess their value in the labor market.

Back when a high school-educated worker could find a good job with decent wages, the question was simply whether or not to go to college. That is no longer the case in today’s economy, which requires at least some college to enter the middle class. The study finds that:

  • The number of postsecondary programs of study more than quintupled between 1985 and 2010 — from 410 to 2,260;
  • The number of colleges and universities more than doubled from 1,850 to 4,720 between 1950 and 2014; and
  • The number of occupations grew from 270 in 1950 to 840 in 2010.

The variety of postsecondary credentials, providers, and online delivery mechanisms has also multiplied rapidly in recent years, underscoring the need for common, measurable outcomes.

College graduates are also showing buyer’s remorse. While they are generally happy with their decision to attend college, more than half would choose a different major, go to a different college, or pursue a different postsecondary credential if they had a chance.

The Georgetown study points out that the lack of information drives the higher education market toward mediocrity. The report argues that postsecondary education and training needs to be more closely aligned to careers to better equip learners and workers with the skills they need to succeed in the 21st century economy and close the skills gap.

The stakes couldn’t be higher for students to make the right decisions. Since 1980, tuition and fees at public four-year colleges and universities have grown 19 times faster than family incomes. Students and families want — and need — to know the value they are getting for their investment.

 

 



Also see:

  • Trumping toward college transparency — from linkedin.com by Anthony Carnevale
    The perfect storm is gathering around the need to increase transparency around college and careers. And in accordance with how public policy generally comes about, it might just happen. 


 

 

 


 


 

The final report of Massachusetts Institute of Technology’s Online Education Policy Initiative presents findings from discussions among the members of the Institute-wide initiative supported by advice from the advisory group. The report reflects comments and responses received from many sources, including education experts, government education officials, and representatives of university organizations.

 

 

Our findings target four areas: interdisciplinary collaboration, online educational technologies, the profession of the learning engineer, and institutional and organizational change. Focused attention in these areas could significantly advance our understanding of the opportunities and challenges in transforming education.

 

Recommendation 1:
Increase Interdisciplinary Collaboration Across Fields of Research in Higher Education, Using an Integrated Research Agenda

Recommendation 2:
Promote Online as an Important Facilitator in Higher Education

Recommendation 3:
Support the Expanding Profession of the “Learning Engineer”

Recommendation 4:
Foster Institutional and Organizational Change in Higher Education to Implement These Reforms

 

 

 

Also see:
MIT releases online education policy initiative report — from news.mit.edu by Jessica Fujimori, April 1, 2016
New report draws on diverse fields to reflect on digital learning.

Excerpts:

A new MIT report on online education policy draws on diverse fields, from socioeconomics to cognitive science, to analyze the current state of higher education and consider how advances in learning science and online technology might shape its future.

Titled “Online Education: A Catalyst for Higher Education Reform,” the report presents four overarching recommendations, stressing the importance of interdisciplinary collaboration, integration between online and traditional learning, a skilled workforce specializing in digital learning design, and high-level institutional and organizational change.

“There’s so much going on in online education, and it’s moving so quickly, that it’s important to take time to reflect,” says Eric Klopfer, a key participant in the initiative, who is a professor of education and directs the MIT Scheller Teacher Education Program. “One of the goals of the report is to try to help frame the discussion and to pull together some of the pieces of the conversation that are taking place in different arenas but are not necessarily considered in an integrated way,” Willcox says.

 

“We believe that there is a new category of professionals emerging from all this,” Sarma says. “We use the term ‘learning engineer,’ but maybe it’s going to be some other term — who knows?”

These “learning engineers” would have expertise in a discipline as well as in learning science and educational technologies, and would integrate knowledge across fields to design and optimize learning experiences.

“It’s important that this cadre of professionals get recognized as a valuable profession and provided with opportunities for advancement,” Willcox says. “Without people like this, we’re not going to make a transformation in education.”

 

Finally, the report recommends mechanisms to stimulate high-level institutional and organizational change to support the transformation of the industry, such as nurturing change agents and role models, and forming thinking communities to evaluate reform options.

“Policy makers and decision makers at institutions need to be proactive in thinking about this,” says Willcox. “There’s a lot to be learned by looking at industries that have seen this kind of transformation, particularly transformations brought on by digital technologies.”

 

Some items from Bryan Alexander:

 

 

Excerpt:

Today’s students expect their entire experience with an institution to mirror what they see from major online retailers and service providers: personalized, supportive, and flexible. However, institutions are having to deliver on these heightened expectations with smaller budgets and less capacity to increase prices than ever before.

Scaling is absolutely critical for higher education institutions in today’s marketplace. Through scaling, institutions can “do more with less”—they can meet the sky-high expectations of today’s discerning students while keeping their costs and prices low.

This Feature highlights some of the approaches today’s institutions are taking to achieving scale.

 

 

A new vision for paying for higher education — from usnews.com by Lauren Camera
How do you build the federal student loan system from the ground up?

Excerpt:

As it stands now, the current system for financing higher education is particularly unfair for poor students, many of whom are forced to borrow more money than their wealthier peers, graduate at a much lower rate and go into default at a much higher rate.

To be sure, the average six-year graduation rate for students seeking a bachelor’s degree is 59.4 percent, but a recent survey of more than 1,000 public and private four-year colleges found that only 51 percent of Pell recipients graduate. And at community colleges, only 23 percent of first-time, full-time students ever receive a degree.

 

 

Is it time for colleges to withdraw from their outdated schedules? — from pri.org by Caroline Lester

Excerpt:

We asked a few college grads what they’d like to change about the current system. Their answers spanned from increasing accessibility, to eliminating lectures, to creating greater support services for students at risk of dropping out.

That last point is key: the vast majority of students who start college don’t graduate.

Community college, state schools, and private universities — six-year completion rates are falling. To Michael Crow, president of Arizona State University, this means something is wrong.

Crow believes that the best way to address America’s higher education woes is to lower the cost of a college education while personalizing teaching. He proposes three big changes…

 

 

Credentialing, free tuition top this week’s news — from ecampusnews.com by Laura Devaney

 

 

 

Some items from Jeff Selingo:


 

 

A brief excerpt from newsletter from one of Michigan’s Senators, Debbie Stabenow:

62 percent of students in Michigan graduate #InTheRed with student loan debt. A student who graduated from a 4-year Michigan college or university in 2014 owes on average almost $30,000 in loans, making Michigan 9th in the country on average student loan debt.  Student loan debt in the United States is over $1.3 trillion and is the 2nd highest form of consumer debt.

 

 

…and back from March 2015:

 


 

 

 

From DSC:
Below are some further items that discuss the need for some frameworks, policies, institutes, research, etc. that deal with a variety of game-changing technologies that are quickly coming down the pike (if they aren’t already upon us). We need such things to help us create a positive future.

Also see Part I of this thread of thinking entitled, “The need for ethics, morals, policies, & serious reflection about what kind of future we want has never been greater!” There have been so many other items that have come out since that posting that I felt I needed to add another one here.

What kind of future do we want? How are we going to ensure that we get there?

As the saying goes, “Just because we can do something doesn’t mean we should.” Another saying also comes to mind: “What could possibly go wrong with this? It’s a done deal.”

While some of the items below should have very positive impacts on society, I do wonder how long it will take the hackers — the ones bent on wreaking havoc — to mess up some of these types of applications, with potentially deadly consequences. Security-related concerns must be dealt with here.


 

5 amazing and alarming things that may be done with your DNA — from washingtonpost.com by Matt McFarland

Excerpt (emphasis DSC):

Venter is leading efforts to use digital technology to analyze humans in ways we never have before, and the results will have huge implications for society. The latest findings he described are currently being written up for scientific publications. Venter didn’t want to usurp the publications, so he wouldn’t dive into extensive detail of how his team has made these breakthroughs. But what he did share offers an exciting and concerning overview of what lies ahead for humanity. There are social, legal and ethical implications to start considering. Here are five examples of how digitizing DNA will change the human experience:

 

 

These are the decisions the Pentagon wants to leave to robots — from defenseone.com by Patrick Tucker
The U.S. military believes its battlefield edge will increasingly depend on automation and artificial intelligence.

Excerpt:

Conducting cyber defensive operations, electronic warfare, and over-the-horizon targeting. “You cannot have a human operator operating at human speed fighting back at determined cyber tech,” Work said. “You are going to need have a learning machine that does that.” He did not say  whether the Pentagon is pursuing the autonomous or automatic deployment of offensive cyber capabilities, a controversial idea to be sure. He also highlighted a number of ways that artificial intelligence could help identify new waveforms to improve electronic warfare.

 

 

Britain should lead way on genetically engineered babies, says Chief Scientific Adviser — from telegraph.co.uk by Sarah Knapton
Sir Mark Walport, who advises the government on scientific matters, said it could be acceptable to genetically edit human embryos

Excerpt:

Last week more than 150 scientists and campaigners called for a worldwide ban on the practice, claiming it could ‘irrevocably alter the human species’ and lead to a world where inequality and discrimination were ‘inscribed onto the human genome.’

But at a conference in London [on 12/8/15], Sir Mark Walport, who advises the government on scientific matters, said he believed there were ‘circumstances’ in which the genetic editing of human embryos could be ‘acceptable’.

 

 

Cyborg Future: Engineers Build a Chip That Is Part Biological and Part Synthetic — from futurism.com

Excerpt:

Engineers have succeeded in combining an integrated chip with an artificial lipid bilayer membrane containing ATP-powered ion pumps, paving the way for more such artificial systems that combine the biological with the mechanical down the road.

 

 

Robots expected to run half of Japan by 2035 — from engadget.com by Andrew Tarantola
Something-something ‘robot overlords’.

Excerpt:

Data analysts Nomura Research Institute (NRI), led by researcher Yumi Wakao, figure that within the next 20 years, nearly half of all jobs in Japan could be accomplished by robots. Working with Professor Michael Osborne from Oxford University, who had previously investigated the same matter in both the US and UK, the NRI team examined more than 600 jobs and found that “up to 49 percent of jobs could be replaced by computer systems,” according to Wakao.

 

 

 

Cambridge University is opening a £10 million centre to study the impact of AI on humanity — from businessinsider.com by Sam Shead

Excerpt:

Cambridge University announced on [12/3/15] that it is opening a new £10 million research centre to study the impact of artificial intelligence on humanity.

The 806-year-old university said the centre, being funded with a grant from non-profit foundation The Leverhulme Trust, will explore the opportunities and challenges facing humanity as a result of further developments in artificial intelligence.

 


 

 

Tech leaders launch nonprofit to save the world from killer robots — from csmonitor.com by Jessica Mendoza
Elon Musk, Sam Altman, and other tech titans have invested $1 billion in a nonprofit that would help direct artificial intelligence technology toward positive human impact. 

 

 

 

 

2016 will be a pivotal year for social robots — from therobotreport.com by Frank Tobe
1,000 Peppers are selling each month from a big-dollar venture between SoftBank, Alibaba and Foxconn; Jibo just raised another $16 million as it prepares to deliver 7,500+ units in Mar/Apr of 2016; and Buddy, Rokid, Sota and many others are poised to deliver similar forms of social robots.

Excerpt:

These new robots, and the proliferation of mobile robot butlers, guides and kiosks, promise to recognize your voice and face and help you plan your calendar, provide reminders, take pictures of special moments, text, call and videoconference, order fast food, keep watch on your house or office, read recipes, play games, read emotions and interact accordingly, and the list goes on. They are attempting to be analogous to a sharp administrative assistant that knows your schedule, contacts and interests and engages with you about them, helping you stay informed, connected and active.

 

 

IBM opens its artificial mind to the world — from fastcompany.com by Sean Captain
IBM is letting companies plug into its Watson artificial intelligence engine to make sense of speech, text, photos, videos, and sensor data.

Excerpt:

Artificial intelligence is the big, oft-misconstrued catchphrase of the day, making headlines recently with the launch of the new OpenAI organization, backed by Elon Musk, Peter Thiel, and other tech luminaries. AI is neither a synonym for killer robots nor a technology of the future, but one that is already finding new signals in the vast noise of collected data, ranging from weather reports to social media chatter to temperature sensor readings. Today IBM has opened up new access to its AI system, called Watson, with a set of application programming interfaces (APIs) that allow other companies and organizations to feed their data into IBM’s big brain for analysis.

 

 

GE wants to give industrial machines their own social network with Predix Cloud — from fastcompany.com by Sean Captain
GE is selling a new service that promises to predict when a machine will break down…so technicians can preemptively fix it.

 

 

Foresight 2020: The future is filled with 50 billion connected devices — from ibmbigdatahub.com by Erin Monday

Excerpt:

By 2020, there will be over 50 billion connected devices generating continuous data.

This figure is staggering, but is it really a surprise? The world has come a long way from 1992, when the number of computers was roughly equivalent to the population of San Jose. Today, in 2015, there are more connected devices out there than there are human beings. Ubiquitous connectivity is very nearly a reality. Every day, we get a little closer to a time where businesses, governments and consumers are connected by a fluid stream of data and analytics. But what’s driving all this growth?

 

 

Designing robots that learn as effortlessly as babies — from singularityhub.com by Shelly Fan

Excerpt:

A wide-eyed, rosy-cheeked, babbling human baby hardly looks like the ultimate learning machine.

But under the hood, an 18-month-old can outlearn any state-of-the-art artificial intelligence algorithm.

Their secret sauce?

They watch; they imitate; and they extrapolate.

Artificial intelligence researchers have begun to take notice. This week, two separate teams dipped their toes into cognitive psychology and developed new algorithms that teach machines to learn like babies. One instructs computers to imitate; the other, to extrapolate.

 

 

Researchers have found a new way to get machines to learn faster — from fortune.com by  Hilary Brueck

Excerpt:

An international team of data scientists is proud to announce the very latest in machine learning: they’ve built a program that learns… programs. That may not sound impressive at first blush, but making a machine that can learn based on a single example is something that’s been extremely hard to do in the world of artificial intelligence. Machines don’t learn like humans—not as fast, and not as well. And even with this research, they still can’t.

 

 

Team showcase how good Watson is at learning — from adigaskell.org

Excerpt:

Artificial intelligence has undoubtedly come a long way in the last few years, but there is still much to be done to make it intuitive to use.  IBM’s Watson has been one of the most well-known exponents during this time, but despite its initial success, there are issues to overcome with it.

A team led by Georgia Tech is attempting to do just that.  They’re looking to train Watson to get better at returning answers to specific queries.

 

 

Why The Internet of Things will drive a Knowledge Revolution — from linkedin.com by David Evans

Excerpt:

As these machines inevitably connect to the Internet, they will ultimately connect to each other so they can share and collaborate on their own findings. In fact, in 2014 machines got their own “World Wide Web,” called RoboEarth, in which to share knowledge with one another. …
The implications of all of this are at minimum twofold:

  • The way we generate knowledge is going to change dramatically in the coming years.
  • Knowledge is about to increase at an exponential rate.

What we choose to do with this newfound knowledge is of course up to us. We are about to face some significant challenges at scales we have yet to experience.

 

 

Drone squad to be launched by Tokyo police — from bbc.com

Excerpt:

A drone squad, designed to locate and – if necessary – capture nuisance drones flown by members of the public, is to be launched by police in Tokyo.

 

 

An advance in artificial intelligence rivals human abilities — from todayonline.com by John Markoff

Excerpt:

NEW YORK — Computer researchers reported artificial-intelligence advances [on Dec 10] that surpassed human capabilities for a narrow set of vision-related tasks.

The improvements are noteworthy because so-called machine-vision systems are becoming commonplace in many aspects of life, including car-safety systems that detect pedestrians and bicyclists, as well as in video game controls, Internet search and factory robots.

 

 

Somewhat related:

Novo Nordisk, IBM Watson Health to create ‘virtual doctor’ — from wsj.com by Denise Roland
Software could dispense treatment advice for diabetes patients

Excerpt:

Novo Nordisk A/S is teaming up with IBM Watson Health, a division of International Business Machines Corp., to create a “virtual doctor” for diabetes patients that could dispense treatment advice such as insulin dosage.

The Danish diabetes specialist hopes to use IBM’s supercomputer platform, Watson, to analyze health data from diabetes patients to help them manage their disease.

 

 

Why Google’s new quantum computer could launch an artificial intelligence arms race — from washingtonpost.com

 

 

 

8 industries robots will completely transform by 2025 — from techinsider.io

 

 

 

Addendums on 12/17/15:

Russia and China are building highly autonomous killer robots — from businessinsider.com.au by Danielle Muoi

Excerpt:

Russia and China are creating highly autonomous weapons, more commonly referred to as killer robots, and it’s putting pressure on the Pentagon to keep up, according to US Deputy Secretary of Defense Robert Work. During a national-security forum on Monday, Work said that China and Russia are heavily investing in a roboticized army, according to a report from Defense One.

Your Algorithmic Self Meets Super-Intelligent AI — from techcrunch.com by Jarno M. Koponen

Excerpt:

At the same time, your data and personalized experiences are used to develop and train the machine learning systems that are powering the Siris, Watsons, Ms and Cortanas. Be it a speech recognition solution or a recommendation algorithm, your actions and personal data affect how these sophisticated systems learn more about you and the world around you.

The less explicit fact is that your diverse interactions — your likes, photos, locations, tags, videos, comments, route selections, recommendations and ratings — feed learning systems that could someday transform into superintelligent AIs with unpredictable consequences.

As of today, you can’t directly affect how your personal data is used in these systems.

 

 

Addendum on 12/21/15:

  • Facewatch ‘thief recognition’ CCTV on trial in UK stores — from bbc.com
    Excerpts (emphasis DSC):
    Face-recognition camera systems should be used by police, he tells me. “The technology’s here, and we need to think about what is a proportionate response that respects people’s privacy,” he says.

    “The public need to ask themselves: do they want six million cameras painted red at head height looking at them?”

 

 

From DSC:
This posting is meant to surface the need for debates/discussions, new policy decisions, and serious reflection on what type of future we want. Given the pace of technological change, we need to be constantly asking ourselves what kind of future we want and then actively creating that future — instead of just letting things happen simply because they can happen (i.e., just because something can be done doesn’t mean it should be done).

Gerd Leonhard’s work is relevant here.  In the resource immediately below, Gerd asserts:

I believe we urgently need to start debating and crafting a global Digital Ethics Treaty. This would delineate what is and is not acceptable under different circumstances and conditions, and specify who would be in charge of monitoring digressions and aberrations.

I am also including some other relevant items here that bear witness to the increasingly rapid speed at which we’re moving now.


 

Redefining the relationship of man and machine: here is my narrated chapter from the ‘The Future of Business’ book (video, audio and pdf) — from futuristgerd.com by Gerd Leonhard


 

 

Robot revolution: rise of ‘thinking’ machines could exacerbate inequality — from theguardian.com by Heather Stewart
Global economy will be transformed over next 20 years at risk of growing inequality, say analysts

Excerpt (emphasis DSC):

A “robot revolution” will transform the global economy over the next 20 years, cutting the costs of doing business but exacerbating social inequality, as machines take over everything from caring for the elderly to flipping burgers, according to a new study.

As well as robots performing manual jobs, such as hoovering the living room or assembling machine parts, the development of artificial intelligence means computers are increasingly able to “think”, performing analytical tasks once seen as requiring human judgment.

In a 300-page report, revealed exclusively to the Guardian, analysts from investment bank Bank of America Merrill Lynch draw on the latest research to outline the impact of what they regard as a fourth industrial revolution, after steam, mass production and electronics.

“We are facing a paradigm shift which will change the way we live and work,” the authors say. “The pace of disruptive technological innovation has gone from linear to parabolic in recent years. Penetration of robots and artificial intelligence has hit every industry sector, and has become an integral part of our daily lives.”

 


 

 

 

First genetically modified humans could exist within two years — from telegraph.co.uk by Sarah Knapton
Biotech company Editas Medicine is planning to start human trials to genetically edit genes and reverse blindness

Excerpt:

Humans who have had their DNA genetically modified could exist within two years after a private biotech company announced plans to start the first trials into a ground-breaking new technique.

Editas Medicine, which is based in the US, said it plans to become the first lab in the world to ‘genetically edit’ the DNA of patients suffering from a genetic condition – in this case the blinding disorder Leber congenital amaurosis.

 

 

 

Gartner predicts our digital future — from gartner.com by Heather Levy
Gartner’s Top 10 Predictions herald what it means to be human in a digital world.

Excerpt:

Here’s a scene from our digital future: You sit down to dinner at a restaurant where your server was selected by a “robo-boss” based on an optimized match of personality and interaction profile, and the angle at which he presents your plate, or how quickly he smiles can be evaluated for further review.  Or, perhaps you walk into a store to try on clothes and ask the digital customer assistant embedded in the mirror to recommend an outfit in your size, in stock and on sale. Afterwards, you simply tell it to bill you from your mobile and skip the checkout line.

These scenarios describe two predictions in what will be an algorithmic and smart machine driven world where people and machines must define harmonious relationships. In his session at Gartner Symposium/ITxpo 2016 in Orlando, Daryl Plummer, vice president, distinguished analyst and Gartner Fellow, discussed how Gartner’s Top Predictions begin to separate us from the mere notion of technology adoption and draw us more deeply into issues surrounding what it means to be human in a digital world.

 

 


 

 

Univ. of Washington faculty study legal, social complexities of augmented reality — from phys.org

Excerpt:

But augmented reality will also bring challenges for law, public policy and privacy, especially pertaining to how information is collected and displayed. Issues regarding surveillance and privacy, free speech, safety, intellectual property and distraction—as well as potential discrimination—are bound to follow.

The Tech Policy Lab brings together faculty and students from the School of Law, Information School and Computer Science & Engineering Department and other campus units to think through issues of technology policy. “Augmented Reality: A Technology and Policy Primer” is the lab’s first official white paper aimed at a policy audience. The paper is based in part on research presented at the 2015 International Joint Conference on Pervasive and Ubiquitous Computing, or UbiComp conference.

Along these same lines, also see:

  • Augmented Reality: Figuring Out Where the Law Fits — from rdmag.com by Greg Watry
    Excerpt:
    With AR comes potential issues the authors divide into two categories. “The first is collection, referring to the capacity of AR to record, or at least register, the people and places around the user. Collection raises obvious issues of privacy but also less obvious issues of free speech and accountability,” the researchers write. The second issue is display, which “raises a variety of complex issues ranging from possible tort liability should the introduction or withdrawal of information lead to injury, to issues surrounding employment discrimination or racial profiling.”

    Current privacy law in the U.S. allows video and audio recording in areas that “do not attract an objectively reasonable expectation of privacy,” says Newell. Further, many uses of AR would be covered under the First Amendment right to record audio and video, especially in public spaces. However, as AR increasingly becomes more mobile, “it has the potential to record inconspicuously in a variety of private or more intimate settings, and I think these possibilities are already straining current privacy law in the U.S.,” says Newell.

 

Stuart Russell on Why Moral Philosophy Will Be Big Business in Tech — from kqed.org by

Excerpt (emphasis DSC):

Our first Big Think comes from Stuart Russell. He’s a computer science professor at UC Berkeley and a world-renowned expert in artificial intelligence. His Big Think?

“In the future, moral philosophy will be a key industry sector,” says Russell.

Translation? In the future, the nature of human values and the process by which we make moral decisions will be big business in tech.

 

Life, enhanced: UW professors study legal, social complexities of an augmented reality future — from washington.edu by Peter Kelley

Excerpt:

But augmented reality will also bring challenges for law, public policy and privacy, especially pertaining to how information is collected and displayed. Issues regarding surveillance and privacy, free speech, safety, intellectual property and distraction — as well as potential discrimination — are bound to follow.

 

An excerpt from:

UW-AR-TechPolicyPrimer-Nov2015

THREE: CHALLENGES FOR LAW AND POLICY
AR systems change human experience and, consequently, stand to challenge certain assumptions of law and policy. The issues AR systems raise may be divided into roughly two categories. The first is collection, referring to the capacity of AR devices to record, or at least register, the people and places around the user. Collection raises obvious issues of privacy but also less obvious issues of free speech and accountability. The second rough category is display, referring to the capacity of AR to overlay information over people and places in something like real time. Display raises a variety of complex issues ranging from possible tort liability should the introduction or withdrawal of information lead to injury, to issues surrounding employment discrimination or racial profiling. Policymakers and stakeholders interested in AR should consider what these issues mean for them.  Issues related to the collection of information include…

 

HR tech is getting weird, and here’s why — from hrmorning.com by guest poster Julia Scavicchio

Excerpt (emphasis DSC):

Technology has progressed to the point where it’s possible for HR to learn almost everything there is to know about employees — from what they’re doing moment-to-moment at work to what they’re doing on their off hours. Guest poster Julia Scavicchio takes a long hard look at the legal and ethical implications of these new investigative tools.  

Why on Earth does HR need all this data? The answer is simple — HR is not on Earth, it’s in the cloud.

The department transcends traditional roles when data enters the picture.

Many ethical questions posed through technology easily come and go because they seem out of this world.

 

 

18 AI researchers reveal the most impressive thing they’ve ever seen — from businessinsider.com by Guia Marie Del Prado

Excerpt:

Where will these technologies take us next? Well to know that we should determine what’s the best of the best now. Tech Insider talked to 18 AI researchers, roboticists, and computer scientists to see what real-life AI impresses them the most.

“The DeepMind system starts completely from scratch, so it is essentially just waking up, seeing the screen of a video game and then it works out how to play the video game to a superhuman level, and it does that for about 30 different video games.  That’s both impressive and scary in the sense that if a human baby was born and by the evening of its first day was already beating human beings at video games, you’d be terrified.”

 

 

 

Algorithmic Economy: Powering the Machine-to-Machine Age Economic Revolution — from formtek.com by Dick Weisinger

Excerpts:

As technology advances, we are becoming increasingly dependent on algorithms for everything in our lives.  Algorithms that can solve our daily problems and tasks will do things like drive vehicles, control drone flight, and order supplies when they run low.  Algorithms are defining the future of business and even our everyday lives.

Sondergaard said that “in 2020, consumers won’t be using apps on their devices; in fact, they will have forgotten about apps. They will rely on virtual assistants in the cloud, things they trust. The post-app era is coming.  The algorithmic economy will power the next economic revolution in the machine-to-machine age. Organizations will be valued, not just on their big data, but on the algorithms that turn that data into actions that ultimately impact customers.”

 

 


 

 

Addendum on 12/14/15:

  • Algorithms rule our lives, so who should rule them? — from qz.com by Dries Buytaert
    As technology advances and more everyday objects are driven almost entirely by software, it’s become clear that we need a better way to catch cheating software and keep people safe.
 

A college completion idea that’s so simple. Why aren’t we doing it? — from huffingtonpost.com by Brad Phillips

Excerpt (emphasis DSC):

This week’s White House “College Opportunity” summit will focus on an overlooked area with enormous potential for student success: K-12 and higher education working together to improve college completion. It sounds so simple and obvious. In fact, many assume it’s already happening. After all, both groups of educators share the same students, just at different points in their education careers. Why wouldn’t they share information about students and coordinate efforts to help students be successful?

The process of closely analyzing high school to college data is eye opening for both K-12 and college educators. Faculty discover that while they both may be calling a subject Algebra or English, what is taught and assigned can be very different, setting up students for a struggle.

In Southern California, high school teachers and college faculty members participating in the English Curriculum Alignment Project (ECAP) shared years of transcript information. Examining student performance over time, educators learned that what was taught in high school English did not align with what was expected in college English.

 

From DSC:
I’ll take that one step further and say that we need stronger continuums between K-12, higher ed, and the corporate/business world. We need more efforts, conversations, mechanisms, tools, communities of practice, and platforms for collaborating with each other. That’s what I try to at least scratch the surface of via this Learning Ecosystems blog — i.e., touching upon areas that involve the worlds of K-12, higher ed, and the corporate/business world.

 

 

Higher Education: New Models, New Rules — from educause.com by Louis Soares, Judith S. Eaton, and Burck Smith
What are the new rules that will accompany future new models in higher education? Three essays address this question by exploring state higher education policy, accreditation for non-institutional education, and the disaggregation of the current higher education model.

 


 

Excerpt from Burck’s essay:

Accordingly, the government spurs “supply” by paying for colleges and universities and spurs “demand” by paying for students. Accreditors determine who can receive these funds. All of this worked well for sixty years. Until, suddenly, it doesn’t.

 

Alive in the Swamp  — from nesta.org.uk by Michael Fullan and Katelyn Donnelly

Excerpt (emphasis and link below from DSC):

The authors argue that we should seek digital innovations that produce at least twice the learning outcome for half the cost of our current tools.  To achieve this, three forces need to come together. One is technology, the other pedagogy, and the third is change knowledge, or how to secure transformation across an entire school system.

The breakthrough in Alive in the Swamp is the development of an Index that will be of practical assistance to those charged with making these kinds of decisions at school, local and system level. Building on Fullan’s previous work, Stratosphere, the Index sets out the questions policymakers need to ask themselves not just about the technology at any given moment but crucially also about how it can be combined with pedagogy and knowledge about system change. Similarly, the Index should help entrepreneurs and education technology developers to consider particular features to build into their products to drive increased learning and achieve systemic impact.

The future will belong not to those who focus on the technology alone but to those who place it in this wider context and see it as one element of a wider system transformation. Fullan and Donnelly show how this can be done in a practical way.


 

 

 


© 2018 | Daniel Christian