Some excerpts of this infographic:
We are hopefully creating the future that we want — the future of our dreams, not our nightmares. The 14 items below show that technology is often way out ahead of us, and it takes time for other areas of society to catch up: making policies and laws, or even deciding whether we should be doing these things in the first place.
Such reflections always make me ask:
Readers of this blog know that I’m generally pro-technology. But with the exponential pace of technological change, we need to slow things down enough to make wise decisions.
Google AI invents its own cryptographic algorithm; no one knows how it works — from arstechnica.co.uk by Sebastian Anthony
Neural networks seem good at devising crypto methods; less good at codebreaking.
Google Brain has created two artificial intelligences that evolved their own cryptographic algorithm to protect their messages from a third AI, which was trying to evolve its own method to crack the AI-generated crypto. The study was a success: the first two AIs learnt how to communicate securely from scratch.
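The study trained the networks end to end, which is why no one can read off the scheme they converged on. For orientation, the classical baseline this setup is usually compared against is shared-key XOR encryption (a one-time pad): an eavesdropper without the key cannot invert it. The sketch below is that hand-written baseline, not the learned algorithm from the study.

```python
# Classical point of reference: XOR with a shared secret key (one-time pad).
# This is NOT the algorithm the Google Brain networks learned -- that one
# has no human-readable specification -- just the scheme it is compared to.
import secrets

def xor_bytes(data: bytes, key: bytes) -> bytes:
    """XOR each byte of `data` with the corresponding byte of `key`."""
    return bytes(d ^ k for d, k in zip(data, key))

plaintext = b"meet at dawn"
key = secrets.token_bytes(len(plaintext))  # shared by Alice and Bob, unknown to Eve

ciphertext = xor_bytes(plaintext, key)  # "Alice" encrypts
recovered = xor_bytes(ciphertext, key)  # "Bob" decrypts with the same key

assert recovered == plaintext
```

In the study's setup the "Eve" network saw only ciphertexts, while "Alice" and "Bob" shared a key; the learned transformation ended up with the analogous property that Eve could not reliably reconstruct the plaintext.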
IoT growing faster than the ability to defend it — from scientificamerican.com by Larry Greenemeier
Last week’s use of connected gadgets to attack the Web is a wake-up call for the Internet of Things, which will get a whole lot bigger this holiday season
With this year’s holiday gift season approaching, the rapidly growing “Internet of Things,” or IoT — which was exploited to help shut down parts of the Web this past Friday — is about to get a lot bigger, and fast. Christmas and Hanukkah wish lists are sure to be filled with smartwatches, fitness trackers, home-monitoring cameras and other Wi-Fi-connected gadgets that upload photos, videos and workout details to the cloud. Unfortunately these devices are also vulnerable to viruses and other malicious software (malware) that can be used to turn them into virtual weapons without their owners’ consent or knowledge.
Last week’s distributed denial of service (DDoS) attacks—in which tens of millions of hacked devices were exploited to jam and take down internet computer servers—are an ominous sign for the Internet of Things. A DDoS is a cyber attack in which large numbers of devices are programmed to request access to the same Web site at the same time, creating data traffic bottlenecks that cut off access to the site. In this case the still-unknown attackers used malware known as “Mirai” to hack into devices whose passwords they could guess, because the owners either could not or did not change the devices’ default passwords.
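The mechanism is easy to model: a server can complete only so many requests per second, so once a botnet's traffic dominates the pool, legitimate users are crowded out roughly in proportion to their share of the total. A toy simulation of that crowding-out effect (all numbers here are illustrative; nothing below resembles real attack tooling):

```python
def simulate_second(capacity: int, legit_requests: int, botnet_requests: int) -> int:
    """Return how many legitimate requests are served in one second.

    The server completes at most `capacity` requests, drawn evenly from
    the combined pool of legitimate and botnet traffic; the rest time out.
    """
    total = legit_requests + botnet_requests
    if total <= capacity:
        return legit_requests  # no congestion: everyone gets through
    # Under congestion, served requests mirror each group's share of traffic.
    return int(legit_requests * capacity / total)

# Normal load: all 1,000 real users are served.
print(simulate_second(capacity=10_000, legit_requests=1_000, botnet_requests=0))  # -> 1000
# Attack: a million hijacked gadgets crowd those same 1,000 users down to 9 served.
print(simulate_second(capacity=10_000, legit_requests=1_000, botnet_requests=1_000_000))  # -> 9
```

The point of the toy numbers: the attacker never needs to "break" anything; sheer volume from cheap, poorly secured devices is enough to starve everyone else.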
How to Get Lost in Augmented Reality — from inverse.com by Tanya Basu; with thanks to Woontack Woo for this resource
There are no laws against projecting misinformation. That’s good news for pranksters, criminals, and advertisers.
Augmented reality offers designers and engineers new tools and artists a new palette, but there’s a dark side to reality-plus. Because A.R. technologies will eventually allow individuals to add flourishes to the environments of others, they will also facilitate the creation of a new type of misinformation and unwanted interactions. There will be advertising (there is always advertising) and there will also be lies perpetrated with optical trickery.
Two computer scientists-turned-ethicists are seriously considering the problematic ramifications of a technology that allows for real-world pop-ups: Keith Miller at the University of Missouri-St. Louis and Bo Brinkman at Miami University in Ohio. Both men are dismissive of Pokémon Go because smartphones are actually behind the times when it comes to A.R.
“A very important question is who controls these augmentations,” Miller says. “It’s a huge responsibility to take over someone’s world — you could manipulate people. You could nudge them.”
Can we build AI without losing control over it? — from ted.com by Sam Harris
Scared of superintelligent AI? You should be, says neuroscientist and philosopher Sam Harris — and not just in some theoretical way. We’re going to build superhuman machines, says Harris, but we haven’t yet grappled with the problems associated with creating something that may treat us the way we treat ants.
Do no harm, don’t discriminate: official guidance issued on robot ethics — from theguardian.com
Robot deception, addiction and possibility of AIs exceeding their remits noted as hazards that manufacturers should consider
Isaac Asimov gave us the basic rules of good robot behaviour: don’t harm humans, obey orders and protect yourself. Now the British Standards Institute has issued a more official version aimed at helping designers create ethically sound robots.
The document, BS8611 Robots and robotic devices, is written in the dry language of a health and safety manual, but the undesirable scenarios it highlights could be taken directly from fiction. Robot deception, robot addiction and the possibility of self-learning systems exceeding their remits are all noted as hazards that manufacturers should consider.
World’s first baby born with new “3 parent” technique — from newscientist.com by Jessica Hamzelou
It’s a boy! A five-month-old boy is the first baby to be born using a new technique that incorporates DNA from three people, New Scientist can reveal. “This is great news and a huge deal,” says Dusko Ilic at King’s College London, who wasn’t involved in the work. “It’s revolutionary.”
The controversial technique, which allows parents with rare genetic mutations to have healthy babies, has only been legally approved in the UK. But the birth of the child, whose Jordanian parents were treated by a US-based team in Mexico, should fast-forward progress around the world, say embryologists.
Scientists Grow Full-Sized, Beating Human Hearts From Stem Cells — from popsci.com by Alexandra Ossola
It’s the closest we’ve come to growing transplantable hearts in the lab
Of the 4,000 Americans waiting for heart transplants, only 2,500 will receive new hearts in the next year. Even for those lucky enough to get a transplant, the biggest risk is that their bodies will reject the new heart and launch a massive immune reaction against the foreign cells. To combat the problems of organ shortage and decrease the chance that a patient’s body will reject it, researchers have been working to create synthetic organs from patients’ own cells. Now a team of scientists from Massachusetts General Hospital and Harvard Medical School has gotten one step closer, using adult skin cells to regenerate functional human heart tissue, according to a study published recently in the journal Circulation Research.
Achieving trust through data ethics — from sloanreview.mit.edu
Success in the digital age requires a new kind of diligence in how companies gather and use data.
A few months ago, Danish researchers used data-scraping software to collect the personal information of nearly 70,000 users of a major online dating site as part of a study they were conducting. The researchers then published their results on an open scientific forum. Their report included the usernames, political leanings, drug usage, and other intimate details of each account.
A firestorm ensued. Although the data gathered and subsequently released was already publicly available, many questioned whether collecting, bundling, and broadcasting the data crossed serious ethical and legal boundaries.
In today’s digital age, data is the primary form of currency. Simply put: Data equals information equals insights equals power.
Technology is advancing at an unprecedented rate — along with data creation and collection. But where should the line be drawn? Where do basic principles come into play to consider the potential harm from data’s use?
“Data Science Ethics” course — from the University of Michigan on edX.org
Learn how to think through the ethics surrounding privacy, data sharing, and algorithmic decision-making.
About this course
As patients, we care about the privacy of our medical record; but as patients, we also wish to benefit from the analysis of data in medical records. As citizens, we want a fair trial before being punished for a crime; but as citizens, we want to stop terrorists before they attack us. As decision-makers, we value the advice we get from data-driven algorithms; but as decision-makers, we also worry about unintended bias. Many data scientists learn the tools of the trade and get down to work right away, without appreciating the possible consequences of their work.
This course, focused on ethics specifically related to data science, will provide you with a framework to analyze these concerns. The framework is based on ethics, which are shared values that help differentiate right from wrong. Ethics are not law, but they are usually the basis for laws.
Everyone, including data scientists, will benefit from this course. No previous knowledge is needed.
Science, Technology, and the Future of Warfare — from mwi.usma.edu by Margaret Kosal
We know that emerging innovations within cutting-edge science and technology (S&T) areas carry the potential to revolutionize governmental structures, economies, and life as we know it. Yet others have argued that such technologies could yield doomsday scenarios and that military applications of such technologies have even greater potential than nuclear weapons to radically change the balance of power. These S&T areas include robotics and autonomous unmanned systems; artificial intelligence; biotechnology, including synthetic and systems biology; the cognitive neurosciences; nanotechnology, including stealth meta-materials; additive manufacturing (aka 3D printing); and the intersection of each with information and computing technologies, i.e., cyber-everything. These concepts and their underlying strategic importance were articulated at the multi-national level in NATO’s May 2010 New Strategic Concept paper: “Less predictable is the possibility that research breakthroughs will transform the technological battlefield…. The most destructive periods of history tend to be those when the means of aggression have gained the upper hand in the art of waging war.”
Low-Cost Gene Editing Could Breed a New Form of Bioterrorism — from bigthink.com by Philip Perry
2012 saw the advent of the gene editing technique CRISPR-Cas9. Now, just a few short years later, gene editing is becoming accessible to more of the world than its scientific institutions. This new technique is now being used in public health projects to undermine the ability of certain mosquitoes to transmit disease, such as the Zika virus. But that initiative has had many in the field wondering whether it could be used for the opposite purpose, with malicious intent.
Back in February, U.S. National Intelligence Director James Clapper put out a Worldwide Threat Assessment, to alert the intelligence community of the potential risks posed by gene editing. The technology, which holds incredible promise for agriculture and medicine, was added to the list of weapons of mass destruction.
It is thought that amateur terrorists, non-state actors such as ISIS, or rogue states such as North Korea, could get their hands on it, and use this technology to create a bioweapon such as the earth has never seen, causing wanton destruction and chaos without any way to mitigate it.
What would happen if gene editing fell into the wrong hands?
Robot nurses will make shortages obsolete — from thedailybeast.com by Joelle Renstrom
By 2022, one million nurse jobs will be unfilled—leaving patients with lower quality care and longer waits. But what if robots could do the job?
Japan is ahead of the curve when it comes to this trend, given that it has the highest proportion of elderly citizens of any country. Toyohashi University of Technology has developed Terapio, a robotic medical cart that can make hospital rounds, deliver medications and other items, and retrieve records. It follows a specific individual, such as a doctor or nurse, who can use it to record and access patient data. Terapio isn’t humanoid, but it does have expressive eyes that change shape and make it seem responsive. This type of robot will likely be one of the first to be implemented in hospitals because it has fairly minimal patient contact, works with staff, and has a benign appearance.
The Partnership on AI was established to study and formulate best practices on AI technologies, to advance the public’s understanding of AI, and to serve as an open platform for discussion and engagement about AI and its influences on people and society.
Support Best Practices
To support research and recommend best practices in areas including ethics, fairness, and inclusivity; transparency and interoperability; privacy; collaboration between people and AI systems; and the trustworthiness, reliability, and robustness of the technology.
Create an Open Platform for Discussion and Engagement
To provide a regular, structured platform for AI researchers and key stakeholders to communicate directly and openly with each other about relevant issues.
To advance public understanding and awareness of AI and its potential benefits and potential costs; to act as a trusted and expert point of contact as questions and concerns arise from the public and others in the area of AI; and to regularly update key constituents on the current state of AI progress.
IBM Watson’s latest gig: Improving cancer treatment with genomic sequencing — from techrepublic.com by Alison DeNisco
A new partnership between IBM Watson Health and Quest Diagnostics will combine Watson’s cognitive computing with genetic tumor sequencing for more precise, individualized cancer care.
Addendum on 11/1/16:
An open letter to Microsoft and Google’s Partnership on AI — from wired.com by Gerd Leonhard
In a world where machines may have an IQ of 50,000, what will happen to the values and ethics that underpin privacy and free will?
Dear Francesca, Eric, Mustafa, Yann, Ralf, Demis and others at IBM, Microsoft, Google, Facebook and Amazon.
The Partnership on AI to benefit people and society is a welcome change from the usual celebration of disruption and magic technological progress. I hope it will also usher in a more holistic discussion about the global ethics of the digital age. Your announcement also coincides with the launch of my book Technology vs. Humanity which dramatises this very same question: How will technology stay beneficial to society?
This open letter is my modest contribution to the unfolding of this new partnership. Data is the new oil – which now makes your companies the most powerful entities on the globe, way beyond oil companies and banks. The rise of ‘AI everywhere’ is certain to only accelerate this trend. Yet unlike the giants of the fossil-fuel era, there is little oversight on what exactly you can and will do with this new data-oil, and what rules you’ll need to follow once you have built that AI-in-the-sky. There appears to be very little public stewardship, while accepting responsibility for the consequences of your inventions is rather slow in surfacing.
For example, fast forward a few years from the technologies found in “The Video Call Center” and one could imagine some powerful means of collaborating from one’s living room:
Micro-credentials offer universities an opportunity to bridge skill gaps — from centerdigitaled.com by Tanya Roscorla
By working with employers, universities can help students of all ages learn skills that industry leaders need.
Higher education leaders are pondering how to make bite-sized, low-cost learning opportunities available to students in different ways.
Working adults change jobs and careers frequently, and they often don’t need to go through an entire degree program to learn different skills. However, they do need a flexible way to earn credentials that are recognized by employers and that demonstrate their ability to apply the skills they learn, said David Schejbal, dean of continuing education, outreach and e-learning at University of Wisconsin-Extension. University micro-credentials can help fill that role.
Six universities have been working with employers to find out what skills they need their employees to have: the Georgia Institute of Technology, University of California Davis Extension, University of California Irvine Extension, University of Wisconsin-Extension, University of Washington and University of California, Los Angeles.
As a result of collaborating with industry, these universities created short courses and certification programs for the University Learning Store that launched last week. These courses fall into three categories: power skills, technical skills and career advancement skills. Power skills used to be called “soft skills” and include communication, collaboration and critical thinking.
Here’s how Maine students will have access to courses at 500 colleges — from bangordailynews.com by Christopher Cousins
AUGUSTA, Maine — Maine college students will soon have seamless access to hundreds of higher education courses in 35 states thanks to a new program to which the state has just been accepted.
The New England Board of Higher Education announced Monday that Maine has been accepted to the New England State Authorization Reciprocity Agreement (N-SARA). Here are the details:
What is SARA?
SARA is a 36-state coalition through which colleges and universities, as well as individual students, can access courses at institutions in other participating states. Reciprocally, Maine institutions also will be allowed to share their online offerings in other states in the coalition. Maine passed a law earlier this year to allow the state’s application to the coalition.
There are about 500 higher education institutions participating in the program nationally.
Higher Education needs a new way for states to oversee the delivery of postsecondary distance education.
The current process is too varied among the states to assure consistent consumer protection, too cumbersome and expensive for institutions that seek to provide education across state borders, and too fragmented to support our country’s architecture for quality assurance in higher education — the quality assurance “triad” of accrediting agencies, the federal government, and the states.
A new, voluntary process of state oversight of distance education has been created to redress these problems. The State Authorization Reciprocity Agreement is a voluntary agreement among its member states and U.S. territories that establishes comparable national standards for interstate offering of postsecondary distance-education courses and programs. It is intended to make it easier for students to take online courses offered by postsecondary institutions based in another state.
The State Authorization Reciprocity Agreement (SARA) establishes a state-level reciprocity process that will support the nation in its efforts to increase the educational attainment of its people by making state authorization:
Florida Universities take system approach in addressing growth of online — from campustechnology.com by Dian Schaffhauser
The Board of Governors for the State University System of Florida is putting final touches on a strategic plan for online education. The idea is to create a framework around which all 10 institutions in the system with online programs can pool their “collective talents and resources toward a common purpose” — helping Florida citizens earn credentials that will “improve their lives, lead to new discoveries and advance Florida’s economy.” Work on the plan began a year ago, when the board created a task force to examine how the state could better meet workforce needs through online education and increase effectiveness while reducing costs.
The “2025 Strategic Plan for Online Education” is intended to guide development and implementation of system policies and legislative budget requests related to online education, with a focus on three primary elements: quality, access and affordability.
Embracing failure to spur success: A new collaborative innovation model — from educause.com by Kim Wilcox and Edward Ray
The implied message is clear: We’d prefer not to talk about what isn’t working at the postsecondary level.
We’re in a competitive sector, and there is misplaced pressure on all higher education institutions to achieve top placement in U.S. News & World Report and other annual rankings, regardless of whether or not those rankings make any sense. We all feel significant pressure to make sure that our constituents—from board members to faculty to parents to legislators—are happy with the direction of the institution. And we also know that those constituents can be impatient in waiting for substantive change to produce positive results. Honest discourse on new initiatives that seem unproductive or in need of modification is likely to lead to unpleasant conversations that few of us would relish.
This is not the way to foster innovation and improvement in higher education. The best innovators in the world typically follow the mantra that failure is acceptable, helpful, and sometimes even necessary to ultimately achieving an objective. Many of the products we rely on today, from Post-it Notes to pacemakers, resulted from mistakes or failures in the search for other innovations. And just about any founder of a successful Silicon Valley start-up has a track record of ventures that failed.
Successful innovation requires experimentation and learning from failure.
At the University of California, Riverside, and Oregon State University, we are engaged in one effort to achieve these goals: the University Innovation Alliance. The UIA is a consortium of eleven major public research universities that are working together to identify new solutions to challenges found throughout the higher education community, and then to share information about failures and successful solutions among institutions.
Some images that are along these lines:
Cross-college collaboration — from insidehighered.com by Megan Rogers
Faced with increasingly tight budgets, liberal arts colleges are looking to share resources to reduce costs and expand programs. But when the end goal is collaboration and not a merger, how should administrators decide which services are appropriate to share?
St. Olaf and Carleton Colleges, both liberal arts colleges in Northfield, Minn., have received a $1.4 million grant from the Andrew W. Mellon Foundation to increase collaboration over the next four years, but are drawing the line at sharing career services departments. And it’s hard to imagine the colleges collaborating in areas where they are competitors, such as fund-raising or admissions, St. Olaf President David Anderson said.
In order to survive within higher ed — and if things migrate to more of a team-based approach to content creation and delivery — I’ve often wondered if the following will occur:
Microsoft joins Degreed’s crusade to ‘jailbreak the degree’ — from gigaom.com by Ki Mae Heussner
Degreed, a San Francisco startup taking on traditional degrees and diplomas with a digital credential that reflects lifelong learning, has recruited its first corporate partner to its cause.
This week the startup said it will launch a partnership with Microsoft Virtual Academy, the tech giant’s online IT training site, which will give students who complete the program’s classes a way to display their achievements on Degreed.
AT&T and Georgia Tech.
Google and edX.
Microsoft and Degreed.
IBM sending Watson to school and partnering with 1000+ universities (see here and here).
JP Morgan and University of Delaware (see this addendum from 10/7/13)
Is there a new trend forming here?