The world of learning and development is on the cusp of change. One of the most promising—and prominent—paradigms comes from neuroscience. Go to any conference today in the workplace learning field and there are numerous sessions on neuroscience and brain-based learning. Vendors sing the praises of neuroscience. Articles abound. Blog posts proliferate.
But where are we on the science? Have we gone too far? Is this us, the field of workplace learning, once again speeding headlong into a field of fad and fantasy? Or are we spot-on to see incredible promise in bringing neuroscience wisdom to bear on learning practice? In this article, I will describe where we are with neuroscience and learning—answering that question as it relates to this point in time—in January of 2016.
…
Taken together, these conclusions are balanced between the promise of neuroscience and the healthy skepticism of scientists. Note, however, that when these researchers talk about the benefits of neuroscience for learning, they see neuroscience applications as happening in the future (perhaps the near future). They do NOT claim that neuroscience has already created a body of knowledge that is applicable to learning and education.
…
Conclusion
The field of workplace learning—and the wider education field—has fallen under the spell of neuroscience (aka brain-science) recommendations. Unfortunately, neuroscience has not yet created a body of proven recommendations. While neuroscience offers great promise for the future, as of this writing—in January 2016—most learning professionals would be better off relying on proven learning recommendations from sources like Brown, Roediger, and McDaniel's book Make It Stick; Benedict Carey's book How We Learn; and Julie Dirksen's book Design for How People Learn.
As learning professionals, we must be more skeptical of neuroscience claims. As research and real-world experience have shown, such claims can steer us toward ineffective learning designs and toward unscrupulous vendors and consultants.
Our trade associations and industry thought leaders need to take a stand as well. Instead of promoting neuroscience claims, they ought to voice a healthy skepticism.
From DSC: This posting is meant to surface the need for debates/discussions, new policy decisions, and for taking the time to seriously reflect upon what type of future we want. Given the pace of technological change, we need to be constantly asking ourselves what kind of future we want and then to be actively creating that future — instead of just letting things happen because they can happen. (i.e., just because something can be done doesn't mean it should be done.)
Gerd Leonhard’s work is relevant here. In the resource immediately below, Gerd asserts:
I believe we urgently need to start debating and crafting a global Digital Ethics Treaty. This would delineate what is and is not acceptable under different circumstances and conditions, and specify who would be in charge of monitoring digressions and aberrations.
I am also including some other relevant items here that bear witness to the increasingly rapid speed at which we’re moving now.
A “robot revolution” will transform the global economy over the next 20 years, cutting the costs of doing business but exacerbating social inequality, as machines take over everything from caring for the elderly to flipping burgers, according to a new study.
As well as robots performing manual jobs, such as hoovering the living room or assembling machine parts, the development of artificial intelligence means computers are increasingly able to “think”, performing analytical tasks once seen as requiring human judgment.
In a 300-page report, revealed exclusively to the Guardian, analysts from investment bank Bank of America Merrill Lynch draw on the latest research to outline the impact of what they regard as a fourth industrial revolution, after steam, mass production and electronics.
“We are facing a paradigm shift which will change the way we live and work,” the authors say. “The pace of disruptive technological innovation has gone from linear to parabolic in recent years. Penetration of robots and artificial intelligence has hit every industry sector, and has become an integral part of our daily lives.”
Humans who have had their DNA genetically modified could exist within two years after a private biotech company announced plans to start the first trials into a ground-breaking new technique.
Editas Medicine, which is based in the US, said it plans to become the first lab in the world to ‘genetically edit’ the DNA of patients suffering from a genetic condition – in this case the blinding disorder ‘Leber congenital amaurosis’.
Gartner predicts our digital future — from gartner.com by Heather Levy. Gartner’s Top 10 Predictions herald what it means to be human in a digital world.
Excerpt:
Here’s a scene from our digital future: You sit down to dinner at a restaurant where your server was selected by a “robo-boss” based on an optimized match of personality and interaction profile, and the angle at which he presents your plate, or how quickly he smiles can be evaluated for further review. Or, perhaps you walk into a store to try on clothes and ask the digital customer assistant embedded in the mirror to recommend an outfit in your size, in stock and on sale. Afterwards, you simply tell it to bill you from your mobile and skip the checkout line.
These scenarios describe two predictions in what will be an algorithmic and smart machine driven world where people and machines must define harmonious relationships. In his session at Gartner Symposium/ITxpo 2016 in Orlando, Daryl Plummer, vice president, distinguished analyst and Gartner Fellow, discussed how Gartner’s Top Predictions begin to separate us from the mere notion of technology adoption and draw us more deeply into issues surrounding what it means to be human in a digital world.
But augmented reality will also bring challenges for law, public policy and privacy, especially pertaining to how information is collected and displayed. Issues regarding surveillance and privacy, free speech, safety, intellectual property and distraction—as well as potential discrimination—are bound to follow.
The Tech Policy Lab brings together faculty and students from the School of Law, Information School and Computer Science & Engineering Department and other campus units to think through issues of technology policy. “Augmented Reality: A Technology and Policy Primer” is the lab’s first official white paper aimed at a policy audience. The paper is based in part on research presented at the 2015 International Joint Conference on Pervasive and Ubiquitous Computing, or UbiComp conference.
Along these same lines, also see:
Augmented Reality: Figuring Out Where the Law Fits — from rdmag.com by Greg Watry. Excerpt:
With AR comes potential issues the authors divide into two categories. “The first is collection, referring to the capacity of AR to record, or at least register, the people and places around the user. Collection raises obvious issues of privacy but also less obvious issues of free speech and accountability,” the researchers write. The second issue is display, which “raises a variety of complex issues ranging from possible tort liability should the introduction or withdrawal of information lead to injury, to issues surrounding employment discrimination or racial profiling.” Current privacy law in the U.S. allows video and audio recording in areas that “do not attract an objectively reasonable expectation of privacy,” says Newell. Further, many uses of AR would be covered under the First Amendment right to record audio and video, especially in public spaces. However, as AR increasingly becomes more mobile, “it has the potential to record inconspicuously in a variety of private or more intimate settings, and I think these possibilities are already straining current privacy law in the U.S.,” says Newell.
Our first Big Think comes from Stuart Russell. He’s a computer science professor at UC Berkeley and a world-renowned expert in artificial intelligence. His Big Think?
“In the future, moral philosophy will be a key industry sector,” says Russell.
Translation? In the future, the nature of human values and the process by which we make moral decisions will be big business in tech.
But augmented reality will also bring challenges for law, public policy and privacy, especially pertaining to how information is collected and displayed. Issues regarding surveillance and privacy, free speech, safety, intellectual property and distraction — as well as potential discrimination — are bound to follow.
An excerpt from:
THREE: CHALLENGES FOR LAW AND POLICY
AR systems change human experience and, consequently, stand to challenge certain assumptions of law and policy. The issues AR systems raise may be divided into roughly two categories. The first is collection, referring to the capacity of AR devices to record, or at least register, the people and places around the user. Collection raises obvious issues of privacy but also less obvious issues of free speech and accountability. The second rough category is display, referring to the capacity of AR to overlay information over people and places in something like real-time. Display raises a variety of complex issues ranging from possible tort liability should the introduction or withdrawal of information lead to injury, to issues surrounding employment discrimination or racial profiling. Policymakers and stakeholders interested in AR should consider what these issues mean for them. Issues related to the collection of information include…
Technology has progressed to the point where it’s possible for HR to learn almost everything there is to know about employees — from what they’re doing moment-to-moment at work to what they’re doing on their off hours. Guest poster Julia Scavicchio takes a long hard look at the legal and ethical implications of these new investigative tools.
Why on Earth does HR need all this data? The answer is simple — HR is not on Earth, it’s in the cloud.
The department transcends traditional roles when data enters the picture.
Many ethical questions posed through technology easily come and go because they seem out of this world.
Where will these technologies take us next? Well, to know that, we should determine what’s the best of the best now. Tech Insider talked to 18 AI researchers, roboticists, and computer scientists to see what real-life AI impresses them the most.
…
“The DeepMind system starts completely from scratch, so it is essentially just waking up, seeing the screen of a video game and then it works out how to play the video game to a superhuman level, and it does that for about 30 different video games. That’s both impressive and scary in the sense that if a human baby was born and by the evening of its first day was already beating human beings at video games, you’d be terrified.”
As technology advances, we are becoming increasingly dependent on algorithms for everything in our lives. Algorithms that can solve our daily problems and tasks will do things like drive vehicles, control drone flight, and order supplies when they run low. Algorithms are defining the future of business and even our everyday lives.
…
Sondergaard said that “in 2020, consumers won’t be using apps on their devices; in fact, they will have forgotten about apps. They will rely on virtual assistants in the cloud, things they trust. The post-app era is coming. The algorithmic economy will power the next economic revolution in the machine-to-machine age. Organizations will be valued, not just on their big data, but on the algorithms that turn that data into actions that ultimately impact customers.”
Robots are learning to say “no” to human orders — from quartz.com by Kit Eaton. Excerpt:
It may seem an obvious idea that a robot should do precisely what a human orders it to do at all times. But researchers in Massachusetts are trying something that many a science fiction movie has already anticipated: They’re teaching robots to say “no” to some instructions. For robots wielding potentially dangerous-to-humans tools on a car production line, it’s pretty clear that the robot should always precisely follow its programming. But we’re building more-clever robots every day and we’re giving them the power to decide what to do all by themselves. This leads to a tricky issue: How exactly do you program a robot to think through its orders and overrule them if it decides they’re wrong or dangerous to either a human or itself? This is what researchers at Tufts University’s Human-Robot Interaction Lab are tackling, and they’ve come up with at least one strategy for intelligently rejecting human orders.
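To make that idea concrete, here is a minimal, hypothetical sketch of rule-based order screening. It is not the Tufts lab’s actual system; every check, threshold, and message below is an assumption made purely for illustration.

```python
# Hypothetical sketch: a robot screens an instruction before acting.
# The checks loosely mirror the idea described above; the names,
# thresholds, and messages are illustrative assumptions only.

from dataclasses import dataclass

@dataclass
class Order:
    action: str           # e.g., "walk_forward"
    issued_by: str        # who gave the command
    risk_to_human: float  # estimated risk of harming a person, 0.0-1.0
    risk_to_self: float   # estimated risk of damaging the robot, 0.0-1.0

KNOWN_ACTIONS = {"walk_forward", "pick_up", "put_down", "stop"}
AUTHORIZED_OPERATORS = {"operator_1", "operator_2"}
RISK_THRESHOLD = 0.5

def screen_order(order: Order) -> tuple[bool, str]:
    """Return (accept, reason); reject orders the robot cannot or should not carry out."""
    if order.action not in KNOWN_ACTIONS:
        return False, f"I don't know how to {order.action}."
    if order.issued_by not in AUTHORIZED_OPERATORS:
        return False, "I'm not obligated to follow your orders."
    if order.risk_to_human > RISK_THRESHOLD:
        return False, "That could hurt someone, so I won't do it."
    if order.risk_to_self > RISK_THRESHOLD:
        return False, "That would likely damage me; please change the plan."
    return True, "Okay."

# Example: an authorized but dangerous order gets rejected with a reason.
risky = Order("walk_forward", "operator_1", risk_to_human=0.0, risk_to_self=0.9)
print(screen_order(risky))  # (False, "That would likely damage me; please change the plan.")
```

The point of the sketch is simply that obedience becomes a decision: before acting, the robot weighs an order against what it knows how to do, who is asking, and what could go wrong, and it can explain a refusal.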
Addendum on 12/14/15:
Algorithms rule our lives, so who should rule them? — from qz.com by Dries Buytaert. As technology advances and more everyday objects are driven almost entirely by software, it’s become clear that we need a better way to catch cheating software and keep people safe.
Telepresence robots to beam psychologists into schools — from zdnet.com by Greg Nichols. Researchers in Utah are experimenting with robots to solve a pressing problem: There aren’t enough pediatric psychologists to go around.
Excerpt:
Researchers in Utah are using an inexpensive robotic platform to help teachers in rural areas implement programs for children with special needs.
It’s another example of the early adoption of telepresence robots by educators and service providers, which I’ve written about here before. While offices are coming around to telepresence solutions for remote workers, teachers and school administrators seem to be readily embracing the technology, which they see as a way to maximize limited resources while bringing needed services to students.
MOOCs – or Massive Open Online Courses – are picking up momentum in popularity – at least in terms of initial enrollment.
Unlike regular college/ university courses, MOOCs can attract many thousands of enrollees around the world. They can come in the form of active course sessions with participant interaction, or as archived content for self-paced study. MOOCs can be free, or there can be a charge – either on a subscription basis or a one-time charge. Free MOOCs sometimes have a paid “verified certificate” option. There are now thousands of MOOCs available worldwide from several hundred colleges, universities and other institutions of higher learning. For your convenience, we’ve compiled a list of 50 of the most popular MOOCs, based on enrollment figures for all sessions of a course. The ranking is based on filtering enrollment data for 185 free MOOCs on various elearning platforms.
NEW YORK — The New York Times is collaborating with Google to launch a virtual reality project that includes a free app to view films on devices like smartphones.
The newspaper publisher said Tuesday that the NYT VR app will be available for download in both iOS and Google Play app stores starting Nov. 5. The first virtual-reality film in the project is titled “The Displaced” and follows the experiences of child refugees from South Sudan, eastern Ukraine and Syria.
Google will supply cardboard viewers that allow customers to view a three-dimensional version of the films. More than 1 million of the viewers will be distributed to home delivery subscribers early next month, and digital subscribers will receive emails with details on how to get the free viewers.
On their way to this month’s 70th United Nations General Assembly, the organization’s annual high-level meeting in New York, diplomats and world leaders will pass by a makeshift glass structure—both a glossy multi-media hub, and a gateway to an entirely different world.
The hub uses virtual reality to allow the UN attendees to see Jordan’s Zaatari camp for Syrian refugees through the eyes of a little girl. And, by using an immersive video portal, which will launch later this week, they will have the opportunity to have face-to-face conversations with residents of the camp.
The effort aims to put a human face on the high-level deliberations about the refugee crisis, which will likely dominate many conversations at the United Nations General Assembly (UNGA). UN Secretary General Ban Ki-Moon has called on the meeting to be “one of compassion, prevention and, above all, action.”
From DSC: VR-based apps have a great deal of potential to develop and practice greater empathy. See these related postings:
When it comes to virtual reality, the University of Maryland, Baltimore County is going for full immersion.
Armed with funding from the National Science Foundation, the university is set to build a virtual reality “environment” that’s designed to help researchers from different fields. It’s called PI2.
In the 15-by-20-foot room, stepping into virtual reality won’t necessarily require goggles.
A visualization wall at the University of Illinois at Chicago’s Electronic Visualization Lab.
UMBC officials say their project will be similar to this. (Photo courtesy of Planar)
Now you’re ready to turn your class into an immersive game, and everything you need is right here. With the help of these resources, you can develop your own gameful class, cook up a transmedia project, design a pervasive game or create your very own ARG (alternate reality game). Games aside, these links are useful for all types of creative learning projects. In most cases, what is on offer is free and/or web based, so only your imagination will be taxed.
If augmented reality could be a shared experience, it could change the way we will use the technology.
Something along these lines is currently in development at a Microsoft laboratory run by Jaron Lanier, one of the pioneers of VR since the 1980s through his company VPL Research. The project, called Comradre, allows multiple users to share virtual- and augmented-reality experiences, reports MIT Technology Review.
Because virtual reality takes place in a fully digital environment, it is not hugely difficult to put multiple users into the same virtual instance at the same time, wirelessly synced across multiple headsets.
vrfavs.com — some serious VR-related resources for you. Note: There are some NSFW items on there; so this is not for kids.
Together, virtual reality and augmented reality are expected to generate about $150 billion in revenue by the year 2020.
Of that staggering sum, according to data released today by Manatt Digital Media, $120 billion is likely to come from sales of augmented reality—with the lion’s share comprised of hardware, commerce, data, voice services, and film and TV projects—and $30 billion from virtual reality, mainly from games and hardware.
The report suggests that the major VR and AR areas that will be generating revenue fall into one of three categories: Content (gaming, film and TV, health care, education, and social); hardware and distribution (headsets, input devices like handheld controllers, graphics cards, video capture technologies, and online marketplaces); and software platforms and delivery services (content creation tools, capture, production, and delivery software, video game engines, analytics, file hosting and compression tools, and B2B and enterprise uses).
Talking about augmented reality technology in teaching and learning, the first thing that comes to mind is this wonderful app called Aurasma. Since its release a few years ago, Aurasma has gained a great deal of popularity, and several teachers have already embraced it within their classrooms. For those of you who are not yet familiar with how Aurasma works and how to use it in your class, this handy guide from Apple in Education is a great resource to start with.
The Oculus Touch virtual reality (VR) controllers finally have their first full videogames. A handful of titles were confirmed to support the kit back at the Oculus Connect 2 developer conference in September. But still one of the most impressive showcases of what these position-tracked devices can do exists in Oculus VR’s original tech demo, Toybox. [On 10/13/15], Oculus VR itself released a new video that shows off what players are able to do within the software.
Much like sketching the first few lines on a blank canvas, the earliest prototypes of a VR project are an exciting time for fun and experimentation. Concepts evolve, interactions are created and discarded, and the demo begins to take shape. Competing with other 3D Jammers around the globe, Swedish game studio Pancake Storm has shared their #3DJam progress on Twitter, with some interesting twists and turns along the way. Pancake Storm started as a secondary school project for Samuel Andresen and Gabriel Löfqvist, who want to break into the world of VR development with their project, tentatively dubbed Wheel Smith and the Willchair.
Recently I learned about a new feature called Virtual Field Trips. In a partnership with 360 Cities, NearPod now gives teachers and students the opportunity to view pristine locations like the Taj Mahal, the Golden Gate Bridge, and The Great Wall of China. You can view famous architecture, famous artifacts, and even different planets! Virtual Field Trips are a great addition to any classroom.
Western University of Health Sciences in Pomona, Calif., has opened a first-of-its-kind virtual reality learning center that’s been designed to allow students from every program—dentistry, osteopathic medicine, veterinary medicine, physical therapy, and nursing—to learn through VR.
The Virtual Reality Learning Center currently houses four different VR technologies: the two zSpace displays, the Anatomage Virtual Dissection Table, the Oculus Rift, and Stanford anatomical models on iPad.
Robert W. Hasel, D.D.S., associate dean of simulation, immersion & digital learning at Western, says VR gives anatomical science teachers the ability to view and interact with anatomy in a way never before experienced. The virtual dissection table allows students to rotate the human body in 360 degrees, take it apart, identify specific structures, study individual systems, look at multiple views at the same time, take a trip inside the body, and look at holograms.
——————————————
——————————————
Addendum on 10/20/15:
Can Virtual Reality Replace the Cadaver Lab? — from centerdigitaled.com by Justine Brown. Colleges are starting to use virtual reality platforms to augment or replace cadaver labs, saving universities hundreds of thousands of dollars.
The Core Problem
Why is teaching so stressful? Teaching isn’t a job; it’s at least three jobs:
Lesson planner: design and deliver creative, meaningful learning experiences
Student caretaker: listen to, interact with, and inspire kids of all types (not just the ones you like)
Organizer: keep your files labeled, grades up to date, and desk cleared
You’re probably really good at (and really enjoy) one of these jobs. You might even have two bright spots. But I’ve never met someone who excels in all three roles. As a result, you always have that nagging feeling that you’re not doing your job well.
Reflections from DSC: As we’re talking about stress and change here, the topic of resilience comes to my mind. (This is also most likely due to my currently teaching a First Year Seminar (FYS) course at Calvin College, where we recently covered this very topic.) From that FYS course, I wanted to mention that PBS.org offers some further thoughts and resources on resilience. I also want to pass along some of the (healthier) coping strategies that folks can use:
Active coping – doing something active to alleviate stress such as talking to the person causing the stress, or problem solving for solutions
Emotional support – seeking a way to balance emotions primarily through sharing emotions with others
Instrumental support – seeking out professional or expert advice on the situation
Positive reframing – seeing the glass as half full (vs. half empty)
Planning – reprioritizing responsibilities to keep stress more manageable
Humor – laughter at the situation if appropriate, but more effective when combined with more action-based coping
Acceptance – if a situation is unchangeable, being able to accept it allows you to move forward in a positive direction to take appropriate action to help cope
Religion – prayer, scripture, reassurance from God’s promises
Also, for further information re: focusing on your strengths and obtaining maximum impact from them, see the work of Marcus Buckingham.
Also see Ian Byrd’s related articles on being a healthier teacher:
What kind of boss hires a thwarted actress for a business-to-business software startup? Stewart Butterfield, Slack’s 42-year-old cofounder and CEO, whose estimated double-digit stake in the company could be worth $300 million or more. He’s the proud holder of an undergraduate degree in philosophy from Canada’s University of Victoria and a master’s degree from Cambridge in philosophy and the history of science.
“Studying philosophy taught me two things,” says Butterfield, sitting in his office in San Francisco’s South of Market district, a neighborhood almost entirely dedicated to the cult of coding. “I learned how to write really clearly. I learned how to follow an argument all the way down, which is invaluable in running meetings. And when I studied the history of science, I learned about the ways that everyone believes something is true–like the old notion of some kind of ether in the air propagating gravitational forces–until they realized that it wasn’t true.”
…
And he’s far from alone. Throughout the major U.S. tech hubs, whether Silicon Valley or Seattle, Boston or Austin, Tex., software companies are discovering that liberal arts thinking makes them stronger. Engineers may still command the biggest salaries, but at disruptive juggernauts such as Facebook and Uber, the war for talent has moved to nontechnical jobs, particularly sales and marketing. The more that audacious coders dream of changing the world, the more they need to fill their companies with social alchemists who can connect with customers–and make progress seem pleasant.
Addendum on 8/7/15:
STEM Study Starts With Liberal Arts — from forbes.com by Chris Teare. Excerpt (emphasis DSC):
Much has been made, especially by the Return on Investment crowd, of the value of undergraduate study in the so-called STEM fields: Science, Technology, Engineering and Mathematics. Lost in the conversation is the way the true liberal arts underpin such study, often because the liberal arts are inaccurately equated solely with the humanities. From the start, the liberal arts included math and science, something I learned firsthand at St. John’s College.
This topic is especially on my mind since reading the excellent article George Anders has written for Forbes: “That ‘Useless’ Liberal Arts Degree Has Become Tech’s Hottest Ticket.” In this context, understanding the actual origin and purposes of the liberal arts is all the more valuable.
From DSC: Many times we don’t want to hear news that could be troubling in terms of our futures. But we need to deal with these trends now or face the destabilization that Harold Jarche mentions in his posting below.
The topics found in the following items should be discussed in courses involving economics, business, political science, psychology, futurism, engineering, religion*, robotics, marketing, the law/legal affairs and others throughout the world. These trends are massive and have enormous ramifications for our societies in the not-too-distant future.
* When I mention religion classes here, I’m thinking of questions such as:
What does God have in mind for the place of work in our lives?
Is it good for us? Why or why not?
How might these trends impact one’s vocation/calling?
…and I’m sure that professors who teach faith/religion-related courses can think of other questions to pursue
One of the greatest issues that will face Canada, and many developed countries in the next decade will be wealth distribution. While it does not currently appear to be a major problem, the disparity between rich and poor will increase. The main reason will be the emergence of a post-job economy. The ‘job’ was the way we redistributed wealth, making capitalists pay for the means of production and in return creating a middle class that could pay for mass produced goods. That period is almost over. From self-driving vehicles to algorithms replacing knowledge workers, employment is not keeping up with production. Value in the network era is accruing to the owners of the platforms, with companies such as Instagram reaching $1 billion valuations with only 13 employees.
…
The emerging economy of platform capitalism includes companies like Amazon, Facebook, Google, and Apple. These giants combined do not employ as many people as General Motors did. But the money accrued by them is enormous and remains in a few hands. The rest of the labour market has to find ways to cobble together a living income. Hence we see many people willing to drive for a company like Uber in order to increase cash-flow. But drivers for Uber have no career track. The platform owners get richer, but the drivers are limited by finite time. They can only drive so many hours per day, and without benefits. At the same time, those self-driving cars are poised to replace all Uber drivers in the near future. Standardized work, like driving a vehicle, has little future in a world of nano-bio-cogno-techno progress.
For the past century, the job was the way we redistributed wealth and protected workers from the negative aspects of early capitalism. As the knowledge economy disappears, we need to re-think our concepts of work, income, employment, and most importantly education. If we do not find ways to help citizens lead productive lives, our society will face increasing destabilization.
Also see:
Will artificial intelligence and robots take your marketing job? — from markedu.com. Technology will overtake jobs to an extent and at a rate we have not seen before. Artificial intelligence is threatening jobs even in service and knowledge intensive sectors. This begs the question: are robots threatening to take your marketing job?
Excerpt:
What exactly is a human job?
The benefits of artificial intelligence are obvious: massive productivity gains plus a new layer of personalized services from your computer – whether that is a burger robot or Dr. Watson. But artificial intelligence has a downside: many jobs will be lost.
A few years ago a study from the University of Oxford got quite a bit of attention. The study said that 47 percent of the US labor market could be replaced by intelligent computers within the next 20 years.
The losers are a wide range of job categories within administration, service, sales, transportation and manufacturing.
…
Before long we should – or must – redefine what exactly a human job is and how useful it is, and consider how we as humans can best complement the extraordinary capabilities of artificial intelligence.
This development is expected to grow fast. There are different predictions about the timing, but by 2030 there will be very few tasks that only a human can solve.
From DSC: After reviewing the two items below, I think you will agree that there is great potential in the future of virtual reality — and the new affordances it will bring with it. For example, if we use it wisely, virtual reality could help us raise cultural intelligence, promote empathy, and reduce racism.
Beyond playing entertaining tricks on people’s perceptions, VR has the potential to promote a better understanding of what it’s like to be someone else – a refugee in a war zone, for example. Studying the effects of those uses, and the psychological effects of VR use in general, has always been the main focus of the lab. Bailenson listed how several of his studies that focused on empathy – for example, making someone visually impaired through VR – yielded results that demonstrated the VR experience made participants more altruistic in real life.
…
“The idea is, I truly believe VR is a good tool to teach you about yourself and to teach you empathy,” said Bailenson. “We want to know how robust that effect is, how long-lasting, because I can see this becoming a tool we all use.”
Fusing art, culture, and retail with virtual reality, augmented reality, and themed architecture and design, each complex will include an interactive museum, a virtual zoo and aquarium, a digital art gallery, a live entertainment stage, an immersive movie theater, and themed experience retail.
… “With virtual reality we can put you in the African savannah or fly you into outer space,” Christopher says. “This completely changes the idea of an old-fashioned museum by allowing kids to experience prehistoric dinosaurs or legendary creatures as we develop new experiences that keep them coming back for more. We’ll combine education and entertainment into one destination that’s always evolving.”
I’ve been reading Make It Stick: The Science of Successful Learning by Peter Brown, Henry Roediger, and Mark McDaniel (Harvard University Press, 2014). What a great book! It provides a whole load of useful tips for learners, teachers and trainers based on solid research.
…
Finishing this book coincides with The Debunker Club’s Debunk Learning Styles Month. And learning styles really do need debunking, not because we, as learners, don’t have preferences, but because there is no model out there which has been proven to be genuinely helpful in predicting learner performance based on their preferences.
Strength of Evidence Against
The strength of evidence against the use of learning styles is very strong. To put it simply, using learning styles to design or deploy learning is not likely to lead to improved learning effectiveness. While it may be true that learners have different learning preferences, those preferences are not likely to be a good guide for learning. The bottom line is that when we design learning, there are far better heuristics to use than learning styles.
…
The weight of evidence at this time suggests that learning professionals should avoid using learning styles as a way to design their learning events. Still, research has not put the last nail in the coffin of learning styles. Future research may reveal specific instances where learning-style methods work. Similarly, learning preferences may be found to have long-term motivational effects.
Few buzzwords in the elearning industry result in greater division than “learning style”. I know from experience. There have been posts on this site related to the topic which resulted in a few passionate comments (such as this one).
…
As such, my intent isn’t to discuss learning styles. Everyone has their mind made up already. It’s time to move the discussion along.
Learner Preference & Motivation
If we bring the conversation “up” a level, we all ultimately agree that every learner has preferences and motivation. No need to cite studies for this concept, just think about yourself for a moment.
You enjoy certain things because you prefer them over others.
You do certain things because you are motivated to do so.
In the same respect, people prefer to learn information in a particular way. They also find some methods of learning more motivating than others. Whether you attribute this to learning styles or not is completely up to you.
Their first conclusion was that learners do indeed differ from one another. For example, some learners may have more ability, more interest, or more background than their classmates. Second, students do express preferences for how they like information to be presented to them… Third, after a careful analysis of the literature, the researchers found no evidence showing that people do in fact learn better when an instructor tailors their teaching style to mesh with their preferred learning style.
The idea of matching lessons to learning styles may be a fashionable trend that will go out of style itself. In the meantime, what are teachers and trainers to do? My advice is to leave the arguments to the academics. Here are some common-sense guidelines in planning a session of learning.
Follow your instincts. If you’re teaching music or speech, for example, wouldn’t auditory-based lessons make the most sense? You wouldn’t teach geography with lengthy descriptions of a coastline’s contours when simply showing a map would capture the essence in a heartbeat, right?
Since people clearly express learning style preferences, why not train them in their preferred style? If you give them what they want, they’ll be much more likely to stay engaged and expand their learning.
Question: What does cognitive science tell us about the existence of visual, auditory, and kinesthetic learners and the best way to teach them?
The idea that people may differ in their ability to learn new material depending on its modality—that is, whether the child hears it, sees it, or touches it—has been tested for over 100 years. And the idea that these differences might prove useful in the classroom has been around for at least 40 years.
What cognitive science has taught us is that children do differ in their abilities with different modalities, but teaching the child in his best modality doesn’t affect his educational achievement. What does matter is whether the child is taught in the content’s best modality. All students learn more when content drives the choice of modality. In this column, I will describe some of the research on matching modality strength to the modality of instruction. I will also address why the idea of tailoring instruction to a student’s best modality is so enduring—despite substantial evidence that it is wrong.
From DSC: Given the controversies over the phrase “learning styles,” I like to use the phrase “learning preferences” instead. Along these lines, I think our goal as teachers, trainers, professors, and SMEs should be to make learning enjoyable — give people more choice and more control. Present content in as many different formats as possible. Give them multiple pathways to meet the learning goals and objectives. If we do that, learning can be more enjoyable and the engagement/motivation levels should rise — resulting in enormous returns on investment over learners’ lifetimes.
New Scientific Review of Learning Styles — from willatworklearning.com. Excerpt: Just last month at the Debunker Club, we debunked the learning-styles approach to learning design based on our previous compilation of learning-styles debunking resources.
Now, there’s a new research review by Daniel Willingham, debunker extraordinaire, and colleagues.
Willingham, D. T., Hughes, E. M., & Dobolyi, D. G. (2015). The scientific status of learning styles theories. Teaching of Psychology, 42(3), 266-271. http://dx.doi.org/10.1177/0098628315589505
What Teens Think About
Generally speaking, Rachael believed we give adolescents far too little credit. The passages in their lives are moments when they ask themselves important questions, such as these:
How does my life have meaning and purpose?
What gifts do I have that the world wants and needs?
To what or whom do I feel most deeply connected?
How can I rise above my fears and doubts?
What or who awakens or touches the spirit within me?
…
What Can Parents and Educators Do?
While parents and educators may have a hard time addressing issues of soul and spirit with their teens, it can help to be aware of some ways into the hearts and minds of young people that can make a difference. Here is what Rachael Kessler suggests in her landmark book, The Soul of Education.
We’re not talking about chemtrails, HAARP (High Frequency Active Auroral Research Program) or other weather warfare that has been featured in science fiction movies; the concerns were raised not by a conspiracy theorist, but by climate scientist, geoengineering specialist and Rutgers University Professor Alan Robock. He “called on secretive government agencies to be open about their interest in radical work that explores how to alter the world’s climate.” If emerging climate-altering technologies can effectively alter the weather, Robock is “worried about who would control such climate-altering technologies.”
Exactly what I’ve been reflecting on recently.
***Who*** is designing, developing, and using the powerful technologies that are coming into play these days and ***for what purposes?***
Do these individuals care about other people? Or are they much more motivated by profit or power?
Given the increasingly potent technologies available today, we need people who care about other people.
Let me explain where I’m coming from here…
I see technologies as tools. For example, a pencil is a technology. On the positive side of things, it can be used to write or draw something. On the negative side of things, it could be used as a weapon to stab someone. It depends upon the user of the pencil and what their intentions are.
Let’s look at some far more powerful — and troublesome — examples.
DRONES
Drones could be useful…or they could be incredibly dangerous. Again, it depends on who is developing/programming them and for what purpose(s). Consider the posting from B.J. Murphy below (BTW, nothing positive or negative is meant by linking to this item, per se).
From DSC: I say this is an illustrative posting because if the inventor/programmer of this sort of drone wanted to poison someone, they surely could do so. I’m not even sure whether this drone exists or not; it doesn’t matter, as we’re quickly heading that way anyway. So potentially, this kind of thing is very scary stuff.
We need people who care about other people.
Or see: Five useful ideas from the World Cup of Drones — from dezeen.com. The article mentions some beneficial purposes of drones, such as for search and rescue missions or for assessing water quality. Some positive intentions, to be sure.
But again, it doesn’t take too much thought to come up with some rather frightening counter-examples.
GENE-RELATED RESEARCH
Or another example re: gene research/applications; an excerpt from:
It was also suggested that large-scale screens such as the one demonstrated in the current study could help researchers discover new cancer drugs that prevent tumors from becoming resistant.
From DSC:
Sounds like there could be some excellent, useful, positive uses for this technology. But who is to say which genes should be turned on and under what circumstances? In the wrong hands, there could be some dangerous uses involved in such concepts as well. Again, it goes back to those involved with designing, developing, selling, using these technologies and services.
ROBOTICS
Will robots be used for positive or negative applications?
Autonomous cars will be commonplace by 2025 and have a near monopoly by 2030, and the sweeping change they bring will eclipse every other innovation our society has experienced. They will cause unprecedented job loss and a fundamental restructuring of our economy, solve large portions of our environmental problems, prevent tens of thousands of deaths per year, save millions of hours with increased productivity, and create entire new industries that we cannot even imagine from our current vantage point.
One can see the potential for good and for bad from the above excerpt alone.
While the above items list mostly positive elements, there are those who fear that autonomous cars could be used by terrorists. That is, could a terrorist organization make some adjustments to such self-driving cars and load them up with explosives, then remotely control them in order to drive them to a certain building or event and cause them to explode?
Again, it depends upon whether the designers and users of a system care about other people.
BIG DATA / AI / COGNITIVE COMPUTING
The rise of machines that learn — from infoworld.com by Eric Knorr; with thanks to Oliver Hansen for his tweet on this. A new big data analytics startup, Adatao, reminds us that we’re just at the beginning of a new phase of computing when systems become much, much smarter.
Excerpt:
“Our warm and creepy future,” is how Miko refers to the first-order effect of applying machine learning to big data. In other words, through artificially intelligent analysis of whatever Internet data is available about us — including the much more detailed, personal stuff collected by mobile devices and wearables — websites and merchants of all kinds will become extraordinarily helpful. And it will give us the willies, because it will be the sort of personalized help that can come only from knowing us all too well.
They know who you are, what you like, and how you buy things. Researchers at MIT have matched up your Facebook (FB) likes, tweets, and social media activity with the products you buy. The results are a highly detailed and accurate profile of how much money you have, where you go to spend it and exactly who you are.
The study spanned three months and used the anonymous credit card data of 1.1 million people. After gathering the data, analysts would marry the findings to a person’s public online profile. By checking things like tweets and Facebook activity, researchers found out the anonymous person’s actual name 90% of the time.
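As a purely illustrative sketch of how this kind of re-identification can work in principle (this is not the MIT team’s method, and every name, record, and threshold below is invented), even a few coincidences of time and place between an “anonymous” purchase trail and public, timestamped posts can single one person out:

```python
# Toy illustration of re-identifying an "anonymous" card by linking its
# purchase times and places to public, timestamped posts. All data is
# invented; the real study used far richer data and statistical matching.

from datetime import datetime, timedelta

transactions = {  # anonymized card data: no names, just times and places
    "card_1042": [
        ("2015-03-02 12:31", "Cafe Luna"),
        ("2015-03-05 18:02", "Midtown Books"),
        ("2015-03-09 09:15", "Cafe Luna"),
    ],
}

public_posts = [  # named, public check-ins gathered from social media
    ("Alex P.", "2015-03-02 12:40", "Cafe Luna"),
    ("Alex P.", "2015-03-05 18:10", "Midtown Books"),
    ("Jordan R.", "2015-03-02 12:35", "Cafe Luna"),
]

WINDOW = timedelta(minutes=30)  # how close in time counts as a plausible overlap

def candidate_matches(card_id: str) -> dict[str, int]:
    """Count how many of a card's transactions each named person plausibly overlaps."""
    counts: dict[str, int] = {}
    for t_time, t_place in transactions[card_id]:
        t = datetime.strptime(t_time, "%Y-%m-%d %H:%M")
        for name, p_time, p_place in public_posts:
            p = datetime.strptime(p_time, "%Y-%m-%d %H:%M")
            if p_place == t_place and abs(p - t) <= WINDOW:
                counts[name] = counts.get(name, 0) + 1
    return counts

print(candidate_matches("card_1042"))  # {'Alex P.': 2, 'Jordan R.': 1} -> "Alex P." stands out
```

The more transactions there are, the faster one name dominates, which is why “anonymized” purchase and location data is so hard to keep truly anonymous.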
Using digital to engage consumers will make the store a more interesting and – dare I say – fun place to shop. Such an enhanced in-store experience leads to more customer loyalty and a bigger basket at checkout. It also gives supermarkets a competitive edge over nearby stores not equipped with the latest technology.
Using video cameras in the ceilings of supermarkets to record shopper behavior is not new. But more retailers will analyze and use the resulting data this year. They will move displays around the store and perhaps deploy new traffic patterns that follow a shopper’s true path to purchase. The result will be increased sales.
Another interesting part of this video analysis that will become more important this year is facial recognition. The most sophisticated cameras are able to detect the approximate age and ethnicity of shoppers. Retailers will benefit from knowing, say, that their shopper base includes more Millennials and Hispanics than last year. Such valuable information will change product assortments.
Hundreds of leading scientists and technologists have joined Stephen Hawking and Elon Musk in warning of the potential dangers of sophisticated artificial intelligence, signing an open letter calling for research on how to avoid harming humanity.
The open letter, drafted by the Future of Life Institute and signed by hundreds of academics and technologists, calls on the artificial intelligence science community to not only invest in research into making good decisions and plans for the future, but to also thoroughly check how those advances might affect society.
SMART / CONNECTED TVs
Potential for good:
Learning/training-related applications, networking, obtaining employment and new projects.
Addendum on 3/19/15 that gets at exactly the same thing here:
Teaching robots to be moral — from newyorker.com by Gary Marcus. Excerpt: Robots and advanced A.I. could truly transform the world for the better—helping to cure cancer, reduce hunger, slow climate change, and give all of us more leisure time. But they could also make things vastly worse, starting with the displacement of jobs and then growing into something closer to what we see in dystopian films. When we think about our future, it is vital that we try to understand how to make robots a force for good rather than evil.
Jennifer A. Doudna, an inventor of a new genome-editing technique, in her office at the University of California, Berkeley. Dr. Doudna is the lead author of an article calling for a worldwide moratorium on the use of the new method, to give scientists, ethicists and the public time to fully understand the issues surrounding the breakthrough. Credit Elizabeth D. Herman for The New York Times
From DSC: Many K-12 schools as well as colleges and universities have been implementing more collaborative learning spaces. Amongst other things, such spaces encourage communication and collaboration — which involve listening. So here are some resources re: listening — a skill that’s not only underrated, but one that we don’t often try to consciously develop and think about in school. Perhaps in our quest for designing more meta-cognitive approaches to learning, we should consider how each of us and our students are actually listening…or not.
We are born with two ears, but only one mouth. Some people say that’s because we should spend twice as much time listening. Others claim it’s because listening is twice as difficult as talking.
Whatever the reason, developing good listening skills is critical to success. There is a difference between hearing and listening.
…
These statistics, gathered from sources including the International Listening Association* website, really drive the point home. They also demonstrate how difficult listening can be:
85 percent of our learning is derived from listening.
Listeners are distracted, forgetful and preoccupied 75 percent of the time.
Most listeners recall only 50 percent of what they have heard immediately after hearing someone say it.
People spend 45 percent of their waking time listening.
Most people remember only about 20 percent of what they hear over time.
People listen up to 450 words per minute, but think at about 1,000 to 3,000 words per minute.
There have been at least 35 business studies indicating listening as a top skill needed for success.
Even though most of us spend the majority of our day listening, it is the communication activity that receives the least instruction in school (Coakley & Wolvin, 1997). Listening training is not required at most universities (Wacker & Hawkins, 1995). Students who are required to take a basic communication course spend less than 7% of class and text time on listening (Janusik, 2002; Janusik & Wolvin, 2002). If students aren’t trained in listening, how do we expect them to improve their listening?
Listening is critical to academic success. An entire freshman class of over 400 students was given a listening test at the beginning of their first semester. After their first year of studies, 49% of students scoring low on the listening test were on academic probation, while only 4.42% of those scoring high on the listening test were on academic probation. Conversely, 68.5% of those scoring high on the listening test were considered Honors Students after the first year, while only 4.17% of those scoring low attained the same success (Conaway, 1982).
Students do not have a clear concept of listening as an active process that they can control. Students find it easier to criticize the speaker as opposed to the speaker’s message (Imhof, 1998).
Effective listening is associated with school success, but not with any major personality dimensions (Bommelje, Houston, & Smither, 2003).
Students report greater listening comprehension when they use the metacognitive strategies of asking pre-questions, interest management, and elaboration strategies (Imhof, 2001).
Students self-report less listening competencies after listening training than before. This could be because students realize how much more there is to listening after training (Ford, Wolvin, & Chung, 2000).
Listening and nonverbal communication training significantly influences multicultural sensitivity (Timm & Schroeder, 2000).
* The International Listening Association promotes the study, development, and teaching of listening and the practice of effective listening skills and techniques. ILA promotes effective listening by establishing a network of professionals exchanging information including teaching methods, training experiences and materials, and pursuing research as listening affects humanity in business, education, and intercultural/international relations.
Course description:
Listening is a critical competency, whether you are interviewing for your first job or leading a Fortune 500 company. Surprisingly, relatively few of us have ever had any formal training in how to listen effectively. In this course, communications experts Tatiana Kolovou and Brenda Bailey-Hughes show how to assess your current listening skills, understand the challenges to effective listening (such as distractions!), and develop behaviors that will allow you to become a better listener—and a better colleague, mentor, and friend.
Topics include:
Recalling details
Empathizing
Avoiding distractions and the feeling of being overwhelmed
Wake-up call: How to really listen — from irishtimes.com by Sarah Green
Insights from the Harvard Business Review into the world of work. Excerpt:
“It can be stated, with practically no qualification,” Ralph Nichols and Leonard Stevens wrote in a 1957 article in Harvard Business Review, “that people in general do not know how to listen. They have ears that hear very well, but seldom have they acquired the necessary aural skills which would allow those ears to be used effectively for what is called listening. ”
In a study of thousands of students and hundreds of business people, they found that most retained only half of what they heard – and this immediately after they had heard it. Six months later, most people only retained 25 per cent.
…
It all starts with actually caring what other people have to say, argues Christine Riordan, provost and professor of management at the University of Kentucky.
Listening with empathy consists of three specific sets of behaviours.
He identifies four biases that short-circuit this process, which he terms the generalized empirical method. All of these biases are not only at play in our individual lives, they also can determine how well organizations operate, even universities.
The first bias is dramatic bias—a flight from the drama of everyday living, an inability or unwillingness to pay attention to experience.
The second bias is individual bias—egoism. Making intelligent decisions requires moving beyond the worldview created by oneself for oneself.
The third bias is group bias. This predisposition is particularly rampant in organizational life.
The fourth bias is general bias—the bias of common sense. This bias views common sense uncritically.
…
The deleterious effect of bias explains why very smart people don’t understand what seems obvious in hindsight. The disappearance of entire industries gives testimony to the destructive power of institutional blindness.
…
There is no magic formula, no uniform model to follow. Universities must do the hard work of analyzing the needs of whom they serve and recreate themselves as viable, exciting institutions suited for a new age.
The universities left standing decades from now will have gone through this enlightening, but painful, process and look in hindsight at the insight they achieved.