From DSC: Though the jigsaw technique has been around for decades, it came to mind the other day as we recently built a highly collaborative, experimental learning space at our college — some would call it an active-learning classroom. There are 7 large displays throughout the space, each backed by Crestron hardware and software that allows the faculty member to control what appears on each display. For example, the professor can take what is on Group #1’s display and send that content throughout the classroom. Or they can display something from a document camera, or from their own laptop, iPad, or smartphone. Students can plug in their devices (BYOD) and connect to the displays via HDMI cables (Phase I) and wirelessly (Phase II).
I like this type of setup because it allows students to quickly and efficiently contribute their own content and the results of their own research to a discussion. Groups can present their content throughout the space.
With that in mind, here are some resources re: the jigsaw classroom/technique.
The jigsaw technique is a method of organizing classroom activity that makes students dependent on each other to succeed. It breaks classes into groups and breaks assignments into pieces that the group assembles to complete the (jigsaw) puzzle. It was designed by social psychologist Elliot Aronson to help weaken racial cliques in forcibly integrated schools.
The technique splits classes into mixed groups to work on small problems that the group collates into a final outcome. For example, an in-class assignment is divided into topics. Students are then split into groups with one member assigned to each topic. Working individually, each student learns about their topic and presents it to their group. Next, students gather into groups divided by topic. Each member presents again to the topic group. In same-topic groups, students reconcile points of view and synthesize information. They create a final report. Finally, the original groups reconvene and listen to presentations from each member. The final presentations provide all group members with an understanding of their own material, as well as the findings that have emerged from topic-specific group discussion.
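For those curious about the mechanics, the grouping step described above can be sketched in a few lines of code. This is only an illustrative sketch; the function name, student names, and topic labels below are my own invention, not part of any actual courseware:

```python
# A minimal sketch of jigsaw group assignment: form "home" groups with
# one member per topic, then derive same-topic "expert" groups.
# All names here are hypothetical, for illustration only.

def jigsaw_groups(students, topics):
    """Return (home_groups, topic_assignment, expert_groups)."""
    n = len(topics)
    # Home groups: consecutive chunks, one student per topic slot.
    homes = [students[i:i + n] for i in range(0, len(students), n)]
    # Assign topics within each home group by seat position.
    assignment = {}
    for home in homes:
        for student, topic in zip(home, topics):
            assignment[student] = topic
    # Expert groups: everyone who shares a topic meets together.
    experts = {t: [s for s, tt in assignment.items() if tt == t]
               for t in topics}
    return homes, assignment, experts

homes, assignment, experts = jigsaw_groups(
    ["Ana", "Ben", "Cho", "Dee", "Eli", "Fay"], ["T1", "T2", "T3"])
```

After the expert-group phase, the `homes` lists reconvene and each member reports back, which is the final step the paragraph above describes.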
We can do nothing to change the past, but we have enormous power to shape the future. Once we grasp that essential insight, we recognize our responsibility and capability for building our dreams of tomorrow and avoiding our nightmares.
–Edward Cornish
From DSC: This posting represents Part III in a series of such postings that illustrate how quickly things are moving (Part I and Part II) and to ask:
How do we collectively start talking about the future that we want?
Then, how do we go about creating our dreams, not our nightmares?
Most certainly, governments will be involved….but who else should be involved?
As I mentioned in Part I, I want to again refer to Gerd Leonhard’s work, as it is relevant here. Gerd asserts:
I believe we urgently need to start debating and crafting a global Digital Ethics Treaty. This would delineate what is and is not acceptable under different circumstances and conditions, and specify who would be in charge of monitoring digressions and aberrations.
Looking at several items below, ask yourself…is this the kind of future that we want? There are some things mentioned below that could likely prove to be very positive and helpful. However, there are also some very troubling advancements and developments as well.
The point here is that we had better start talking and discussing the pros and cons of each one of these areas — and many more I’m not addressing here — or our dreams will turn into our nightmares and we will have missed what Edward Cornish and the World Future Society are often trying to get at.
Google just mastered one of the biggest feats in artificial intelligence since IBM’s Deep Blue beat Garry Kasparov at chess in 1997.
The search giant’s AlphaGo computer program swept the European champion of Go, a complex game with trillions of possible moves, in a five-game series, according to Demis Hassabis, head of Google’s machine learning, who announced the feat in a blog post that coincided with an article in the journal Nature.
While computers can now compete at the grand master level in chess, teaching a machine to win at Go has presented a unique challenge since the game has trillions of possible moves.
Harvard University has been given $28M by the Intelligence Advanced Research Projects Activity (IARPA) to study why the human brain is significantly better at learning and retaining information than artificial intelligence (AI). The investment into this study could potentially help researchers develop AI that’s faster, smarter, and more like human brains.
What is digital ethics?
In our hyper-connected world, an explosion of data is combining with pattern recognition, machine learning, smart algorithms, and other intelligent software to underpin a new level of cognitive computing. More than ever, machines are capable of imitating human thinking and decision-making across a raft of workflows, which presents exciting opportunities for companies to drive highly personalized customer experiences, as well as unprecedented productivity, efficiency, and innovation. However, along with the benefits of this increased automation comes a greater risk for ethics to be compromised and human trust to be broken.
According to Gartner, digital ethics is the system of values and principles a company may embrace when conducting digital interactions between businesses, people and things. Digital ethics sits at the nexus of what is legally required; what can be made possible by digital technology; and what is morally desirable.
As digital ethics is not mandated by law, it is largely up to each individual organisation to set its own innovation parameters and define how its customer and employee data will be used.
An international team of researchers has developed a new algorithm that could one day help scientists reprogram cells to plug any kind of gap in the human body. The computer code model, called Mogrify, is designed to make the process of creating pluripotent stem cells much quicker and more straightforward than ever before.
A pluripotent stem cell is one that has the potential to become any type of specialised cell in the body: eye tissue, or a neural cell, or cells to build a heart. In theory, that would open up the potential for doctors to regrow limbs, make organs to order, and patch up the human body in all kinds of ways that aren’t currently possible.
The Japanese lettuce production company Spread believes the farmers of the future will be robots.
So much so that Spread is creating the world’s first farm manned entirely by robots. Instead of relying on human farmers, the indoor Vegetable Factory will employ robots that can harvest 30,000 heads of lettuce every day.
Don’t expect a bunch of humanoid robots to roam the halls, however; the robots look more like conveyor belts with arms. They’ll plant seeds, water plants, and trim lettuce heads after harvest in the Kyoto, Japan farm.
Drones are advancing every day. They are getting larger, faster, and easier to control. Meanwhile, the medical field keeps facing major losses because emergency response vehicles cannot reach their destinations fast enough. Understandably so: especially in larger cities, traffic makes it nearly impossible to move swiftly. Red flashing lights atop or not, sometimes the roads simply cannot open up. It makes total sense that the future of ambulances would be paved in the open sky rather than on unpredictable roads.
Creator company SoftBank said it planned to open the pop-up mobile store employing only Pepper robots by the end of March, according to Engadget.
The four foot-tall robots will be on hand to answer questions, provide directions and guide customers in taking out phone contracts until early April. It’s currently unknown what brands of phone Pepper will be selling.
BERKELEY, CA — (Marketwired) — 01/27/16 — Wise.io, which delivers machine learning applications to help enterprises provide a better customer experience, today announced the availability of Wise Auto Response, the first intelligent auto reply functionality for customer support organizations. Using machine learning to understand the intent of an incoming ticket and determine the best available response, Wise Auto Response automatically selects and applies the appropriate reply to address the customer issue without ever involving an agent. By helping customer service teams answer common questions faster, Wise Auto Response removes a high percentage of tickets from the queue, freeing up agents’ time to focus on more complex tickets and drive higher levels of customer satisfaction.
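The press release doesn’t describe Wise.io’s internals, but the general pattern it describes (classify a ticket’s intent, then apply a canned reply only when confidence is high enough, otherwise route to a human agent) can be illustrated with a deliberately simple sketch. The intents, keywords, and reply text below are hypothetical, and a real system would use a trained model rather than keyword overlap:

```python
# Illustrative sketch of intent-based auto reply. NOT Wise.io's actual
# system; intents, keywords, and replies are made up for illustration.

CANNED_REPLIES = {
    "password_reset": "Please use the 'Forgot password' link on the sign-in page.",
    "billing": "Our billing team will review your account within one business day.",
}

INTENT_KEYWORDS = {
    "password_reset": {"password", "reset", "login"},
    "billing": {"invoice", "charge", "billing", "refund"},
}

def auto_reply(ticket_text, threshold=1):
    """Score each intent by keyword overlap with the ticket text.
    Reply automatically only when the best score clears the threshold;
    otherwise return None so the ticket goes to a human agent."""
    words = set(ticket_text.lower().split())
    scores = {intent: len(words & kws)
              for intent, kws in INTENT_KEYWORDS.items()}
    best = max(scores, key=scores.get)
    if scores[best] >= threshold:
        return CANNED_REPLIES[best]
    return None  # low confidence: escalate to an agent
```

The design point worth noticing is the confidence threshold: the value of a system like this comes as much from knowing when *not* to auto-respond as from the responses themselves.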
Akili Interactive Labs out of Boston has created a video game that they hope will help treat children diagnosed with attention-deficit hyperactivity disorder by teaching them to focus in a distracting environment.
The game, Project: EVO, is meant to be prescribed to children with ADHD as a medical treatment. And after gaining $30.5 million in funding, investors appear to believe in it. The company plans to use the funding to run clinical trials with plans to gain approval from the US Food and Drug Administration in order to be able to launch the game in late 2017.
Players will enter a virtual world filled with colorful distractions and be required to focus on specific tasks such as choosing certain objects while avoiding others. The game looks to train the portion of the brain designed to manage and prioritize all the information taken in at one time.
From DSC:
I’m not sure where the item below ultimately came from, but it was in one of those emails that came to me via a family member. It reminds me of how people come in and out of our lives — and that goes not only for parents, siblings, spouses, and other family members, but also for teachers, professors, coaches, mentors, pastors, managers, supervisors, etc. They all help us learn and grow…and then we no longer have them in our lives. It reminds me of a learning ecosystem — constantly changing and morphing.
So it’s very relevant not only to our personal lives, but a reminder to be thankful for those who have ridden a train with you, with me — even if for a brief period of time.
The Train of Life
At birth we boarded the train and met our parents, and we believe they will always travel on our side.
However, at some station our parents will step down from the train, leaving us on this journey alone.
As time goes by, other people will board the train; and they will be significant (e.g., our siblings, friends, children, and even the love of our life).
Many will step down and leave a permanent vacuum.
Others will go so unnoticed that we don’t realize they vacated their seats.
This train ride will be full of joy, sorrow, fantasy, expectations, hellos, goodbyes, and farewells.
Success consists of having a good relationship with all passengers, which requires that we give the best of ourselves.
The mystery to everyone is: We do not know at which station we ourselves will step down.
So, we must live in the best way, love, forgive, and offer the best of who we are.
It is important to do this because when the time comes for us to step down and leave our seat empty we should leave behind beautiful memories for those who will continue to travel on the train of life.
I wish you a joyful journey on the train of life.
Reap success and give lots of love.
Lastly, I thank you for being one of the passengers on my train.
7 “Two things I ask of you, Lord; do not refuse me before I die: 8 Keep falsehood and lies far from me; give me neither poverty nor riches, but give me only my daily bread. 9 Otherwise, I may have too much and disown you and say, ‘Who is the Lord?’ Or I may become poor and steal, and so dishonor the name of my God.
4 Enter his gates with thanksgiving and his courts with praise; give thanks to him and praise his name. 5 For the Lord is good and his love endures forever; his faithfulness continues through all generations.
From DSC: This posting is meant to surface the need for debates/discussions, new policy decisions, and for taking the time to seriously reflect upon what type of future we want. Given the pace of technological change, we need to be constantly asking ourselves what kind of future we want and then actively create that future — instead of just letting things happen because they can happen. (i.e., just because something can be done doesn’t mean it should be done.)
Gerd Leonhard’s work is relevant here. In the resource immediately below, Gerd asserts:
I believe we urgently need to start debating and crafting a global Digital Ethics Treaty. This would delineate what is and is not acceptable under different circumstances and conditions, and specify who would be in charge of monitoring digressions and aberrations.
I am also including some other relevant items here that bear witness to the increasingly rapid speed at which we’re moving now.
A “robot revolution” will transform the global economy over the next 20 years, cutting the costs of doing business but exacerbating social inequality, as machines take over everything from caring for the elderly to flipping burgers, according to a new study.
As well as robots performing manual jobs, such as hoovering the living room or assembling machine parts, the development of artificial intelligence means computers are increasingly able to “think”, performing analytical tasks once seen as requiring human judgment.
In a 300-page report, revealed exclusively to the Guardian, analysts from investment bank Bank of America Merrill Lynch draw on the latest research to outline the impact of what they regard as a fourth industrial revolution, after steam, mass production and electronics.
“We are facing a paradigm shift which will change the way we live and work,” the authors say. “The pace of disruptive technological innovation has gone from linear to parabolic in recent years. Penetration of robots and artificial intelligence has hit every industry sector, and has become an integral part of our daily lives.”
Humans who have had their DNA genetically modified could exist within two years after a private biotech company announced plans to start the first trials into a ground-breaking new technique.
Editas Medicine, which is based in the US, said it plans to become the first lab in the world to ‘genetically edit’ the DNA of patients suffering from a genetic condition – in this case the blinding disorder ‘Leber congenital amaurosis’.
Gartner predicts our digital future — from gartner.com by Heather Levy
Gartner’s Top 10 Predictions herald what it means to be human in a digital world.
Excerpt:
Here’s a scene from our digital future: You sit down to dinner at a restaurant where your server was selected by a “robo-boss” based on an optimized match of personality and interaction profile, and the angle at which he presents your plate, or how quickly he smiles can be evaluated for further review. Or, perhaps you walk into a store to try on clothes and ask the digital customer assistant embedded in the mirror to recommend an outfit in your size, in stock and on sale. Afterwards, you simply tell it to bill you from your mobile and skip the checkout line.
These scenarios describe two predictions in what will be an algorithmic and smart machine driven world where people and machines must define harmonious relationships. In his session at Gartner Symposium/ITxpo 2016 in Orlando, Daryl Plummer, vice president, distinguished analyst and Gartner Fellow, discussed how Gartner’s Top Predictions begin to separate us from the mere notion of technology adoption and draw us more deeply into issues surrounding what it means to be human in a digital world.
But augmented reality will also bring challenges for law, public policy and privacy, especially pertaining to how information is collected and displayed. Issues regarding surveillance and privacy, free speech, safety, intellectual property and distraction—as well as potential discrimination—are bound to follow.
The Tech Policy Lab brings together faculty and students from the School of Law, Information School and Computer Science & Engineering Department and other campus units to think through issues of technology policy. “Augmented Reality: A Technology and Policy Primer” is the lab’s first official white paper aimed at a policy audience. The paper is based in part on research presented at the 2015 International Joint Conference on Pervasive and Ubiquitous Computing, or UbiComp conference.
Along these same lines, also see:
Augmented Reality: Figuring Out Where the Law Fits — from rdmag.com by Greg Watry
Excerpt:
With AR comes potential issues the authors divide into two categories. “The first is collection, referring to the capacity of AR to record, or at least register, the people and places around the user. Collection raises obvious issues of privacy but also less obvious issues of free speech and accountability,” the researchers write. The second issue is display, which “raises a variety of complex issues ranging from possible tort liability should the introduction or withdrawal of information lead to injury, to issues surrounding employment discrimination or racial profiling.”

Current privacy law in the U.S. allows video and audio recording in areas that “do not attract an objectively reasonable expectation of privacy,” says Newell. Further, many uses of AR would be covered under the First Amendment right to record audio and video, especially in public spaces. However, as AR increasingly becomes more mobile, “it has the potential to record inconspicuously in a variety of private or more intimate settings, and I think these possibilities are already straining current privacy law in the U.S.,” says Newell.
Our first Big Think comes from Stuart Russell. He’s a computer science professor at UC Berkeley and a world-renowned expert in artificial intelligence. His Big Think?
“In the future, moral philosophy will be a key industry sector,” says Russell.
Translation? In the future, the nature of human values and the process by which we make moral decisions will be big business in tech.
An excerpt from:
THREE: CHALLENGES FOR LAW AND POLICY
AR systems change human experience and, consequently, stand to challenge certain assumptions of law and policy. The issues AR systems raise may be divided into roughly two categories. The first is collection, referring to the capacity of AR devices to record, or at least register, the people and places around the user. Collection raises obvious issues of privacy but also less obvious issues of free speech and accountability. The second rough category is display, referring to the capacity of AR to overlay information over people and places in something like real-time. Display raises a variety of complex issues ranging from possible tort liability should the introduction or withdrawal of information lead to injury, to issues surrounding employment discrimination or racial profiling. Policymakers and stakeholders interested in AR should consider what these issues mean for them. Issues related to the collection of information include…
Technology has progressed to the point where it’s possible for HR to learn almost everything there is to know about employees — from what they’re doing moment-to-moment at work to what they’re doing on their off hours. Guest poster Julia Scavicchio takes a long hard look at the legal and ethical implications of these new investigative tools.
Why on Earth does HR need all this data? The answer is simple — HR is not on Earth, it’s in the cloud.
The department transcends traditional roles when data enters the picture.
Many ethical questions posed through technology easily come and go because they seem out of this world.
Where will these technologies take us next? Well, to know that, we should determine what’s the best of the best now. Tech Insider talked to 18 AI researchers, roboticists, and computer scientists to see which real-life AI impresses them the most.
…
“The DeepMind system starts completely from scratch, so it is essentially just waking up, seeing the screen of a video game and then it works out how to play the video game to a superhuman level, and it does that for about 30 different video games. That’s both impressive and scary in the sense that if a human baby was born and by the evening of its first day was already beating human beings at video games, you’d be terrified.”
As technology advances, we are becoming increasingly dependent on algorithms for everything in our lives. Algorithms that can solve our daily problems and tasks will do things like drive vehicles, control drone flight, and order supplies when they run low. Algorithms are defining the future of business and even our everyday lives.
…
Sondergaard said that “in 2020, consumers won’t be using apps on their devices; in fact, they will have forgotten about apps. They will rely on virtual assistants in the cloud, things they trust. The post-app era is coming. The algorithmic economy will power the next economic revolution in the machine-to-machine age. Organizations will be valued, not just on their big data, but on the algorithms that turn that data into actions that ultimately impact customers.”
Robots are learning to say “no” to human orders — from quartz.com by Kit Eaton
Excerpt:
It may seem an obvious idea that a robot should do precisely what a human orders it to do at all times. But researchers in Massachusetts are trying something that many a science fiction movie has already anticipated: They’re teaching robots to say “no” to some instructions. For robots wielding potentially dangerous-to-humans tools on a car production line, it’s pretty clear that the robot should always precisely follow its programming. But we’re building more-clever robots every day and we’re giving them the power to decide what to do all by themselves. This leads to a tricky issue: How exactly do you program a robot to think through its orders and overrule them if it decides they’re wrong or dangerous to either a human or itself? This is what researchers at Tufts University’s Human-Robot Interaction Lab are tackling, and they’ve come up with at least one strategy for intelligently rejecting human orders.
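The article doesn’t include the Tufts lab’s code, but the underlying idea (vet each incoming order against a set of conditions before accepting it, and give a reason when refusing) can be illustrated with a toy sketch. The condition names, data shapes, and refusal messages below are my own illustrative assumptions, not the lab’s actual implementation:

```python
# A toy sketch of an order-vetting step for a robot. Illustrative only;
# the checks and structures here are hypothetical, not from Tufts' lab.

def vet_order(order, robot_state):
    """Run simple pre-checks on an order; return (accept, reason)."""
    checks = [
        # Can the robot actually perform the requested action?
        ("capability", order["action"] in robot_state["known_actions"],
         "I don't know how to do that."),
        # Would carrying it out endanger a person?
        ("safety", not order.get("endangers_human", False),
         "That would endanger a person."),
        # Would carrying it out damage the robot itself?
        ("self_preservation", not order.get("endangers_robot", False),
         "That would damage me."),
    ]
    for name, ok, reason in checks:
        if not ok:
            return False, reason  # refuse, and explain why
    return True, "OK"

state = {"known_actions": {"walk_forward", "fetch_cup"}}
```

The ordering of the checks is itself a design decision: putting safety checks before obedience is exactly the kind of priority question the researchers are wrestling with.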
Addendum on 12/14/15:
Algorithms rule our lives, so who should rule them? — from qz.com by Dries Buytaert
As technology advances and more everyday objects are driven almost entirely by software, it’s become clear that we need a better way to catch cheating software and keep people safe.
On their way to this month’s 70th United Nations General Assembly, the organization’s annual high-level meeting in New York, diplomats and world leaders will pass by a makeshift glass structure—both a glossy multi-media hub, and a gateway to an entirely different world.
The hub uses virtual reality to allow the UN attendees to see Jordan’s Zaatari camp for Syrian refugees through the eyes of a little girl. And, by using an immersive video portal, which will launch later this week, they will have the opportunity to have face-to-face conversations with residents of the camp.
The effort aims to put a human face on the high-level deliberations about the refugee crisis, which will likely dominate many conversations at the United Nations General Assembly (UNGA). UN Secretary General Ban Ki-Moon has called on the meeting to be “one of compassion, prevention and, above all, action.”
From DSC: VR-based apps have a great deal of potential to develop and practice greater empathy. See these related postings:
When it comes to virtual reality, the University of Maryland, Baltimore County is going for full immersion.
Armed with funding from the National Science Foundation, the university is set to build a virtual reality “environment” that’s designed to help researchers from different fields. It’s called PI2.
In the 15-by-20-foot room, stepping into virtual reality won’t necessarily require goggles.
A visualization wall at the University of Illinois at Chicago’s Electronic Visualization Lab.
UMBC officials say their project will be similar to this. (Photo courtesy of Planar)
Now you’re ready to turn your class into an immersive game, and everything you need is right here. With the help of these resources, you can develop your own gameful class, cook up a transmedia project, design a pervasive game or create your very own [Augmented Reality Game] ARG. Games aside, these links are useful for all types of creative learning projects. In most cases, what is on offer is free and/or web based, so only your imagination will be taxed.
If augmented reality could be a shared experience, it could change the way we use the technology.
Something along these lines is currently in development at a Microsoft laboratory run by Jaron Lanier, one of the pioneers of VR since the 1980s through his company VPL Research. The project, called Comradre, allows multiple users to share virtual- and augmented-reality experiences, reports MIT Technology Review.
Because virtual reality takes place in a fully digital environment, it is not hugely difficult to put multiple users into the same virtual instance at the same time, wirelessly synced across multiple headsets.
vrfavs.com— some serious VR-related resources for you. Note: There are some NSFW items on there; so this is not for kids.
Together, virtual reality and augmented reality are expected to generate about $150 billion in revenue by the year 2020.
Of that staggering sum, according to data released today by Manatt Digital Media, $120 billion is likely to come from sales of augmented reality—with the lion’s share comprised of hardware, commerce, data, voice services, and film and TV projects—and $30 billion from virtual reality, mainly from games and hardware.
The report suggests that the major VR and AR areas that will be generating revenue fall into one of three categories: Content (gaming, film and TV, health care, education, and social); hardware and distribution (headsets, input devices like handheld controllers, graphics cards, video capture technologies, and online marketplaces); and software platforms and delivery services (content creation tools, capture, production, and delivery software, video game engines, analytics, file hosting and compression tools, and B2B and enterprise uses).
When talking about augmented reality technology in teaching and learning, the first thing that comes to mind is this wonderful app called Aurasma. Since its release a few years ago, Aurasma has gained a great deal of popularity, and several teachers have already embraced it within their classrooms. For those of you who are not yet familiar with how Aurasma works and how to use it in your class, this handy guide from Apple in Education is a great resource to start with.
The Oculus Touch virtual reality (VR) controllers finally have their first full videogames. A handful of titles were confirmed to support the kit back at the Oculus Connect 2 developer conference in September. But still one of the most impressive showcases of what these position-tracked devices can do exists in Oculus VR’s original tech demo, Toybox. [On 10/13/15], Oculus VR released a new video that shows off what players are able to do within the software.
Much like sketching the first few lines on a blank canvas, the earliest prototypes of a VR project are an exciting time for fun and experimentation. Concepts evolve, interactions are created and discarded, and the demo begins to take shape.

Competing with other 3D Jammers around the globe, Swedish game studio Pancake Storm has shared their #3DJam progress on Twitter, with some interesting twists and turns along the way. Pancake Storm started as a secondary school project for Samuel Andresen and Gabriel Löfqvist, who want to break into the world of VR development with their project, tentatively dubbed Wheel Smith and the Willchair.
Recently I learned about a new feature called Virtual Field Trips. In a partnership with 360 Cities, NearPod now gives teachers and students the opportunity to view pristine locations like the Taj Mahal, the Golden Gate Bridge, and The Great Wall of China. You can view famous architecture, famous artifacts, and even different planets! Virtual Field Trips are a great addition to any classroom.
Western University of Health Sciences in Pomona, Calif., has opened a first-of-its-kind virtual reality learning center that’s been designed to allow students from every program—dentistry, osteopathic medicine, veterinary medicine, physical therapy, and nursing—to learn through VR.
The Virtual Reality Learning Center currently houses four different VR technologies: the two zSpace displays, the Anatomage Virtual Dissection Table, the Oculus Rift, and Stanford anatomical models on iPad.
Robert W. Hasel, D.D.S., associate dean of simulation, immersion & digital learning at Western, says VR gives anatomical science teachers the ability to view and interact with anatomy in a way never before experienced. The virtual dissection table allows students to rotate the human body in 360 degrees, take it apart, identify specific structures, study individual systems, look at multiple views at the same time, take a trip inside the body, and look at holograms.
——————————————
——————————————
Addendum on 10/20/15:
Can Virtual Reality Replace the Cadaver Lab? — from centerdigitaled.com by Justine Brown
Colleges are starting to use virtual reality platforms to augment or replace cadaver labs, saving universities hundreds of thousands of dollars.
28 “The Lord heard your words when you spoke to me, and the Lord said to me, ‘I have heard the words of this people which they have spoken to you. They have done well in all that they have spoken.
29 Oh that they had such a heart in them, that they would fear [and worship Me with awe-filled reverence and profound respect] and keep all My commandments always, so that it may go well with them and with their children forever!
5 Trust in the Lord with all your heart and lean not on your own understanding; 6 in all your ways submit to him, and he will make your paths straight.[a]