From DSC: I just wanted to include some excerpts (see below) from Gartner’s 100 Data and Analytics Predictions Through 2021 report. I do so to illustrate how technology’s influence continues to expand throughout many societies around the globe, and to say that if you want a sure-thing job in the next 1-15 years, I would go into studying data science and/or artificial intelligence!
Excerpts:
As evidenced by its pervasiveness within our vast array of recently published Predicts 2017 research, it is clear that data and analytics are increasingly critical elements across most industries, business functions and IT disciplines. Most significantly, data and analytics are key to a successful digital business. This collection of more than 100 data-and-analytics-related Strategic Planning Assumptions (SPAs) or predictions through 2021 heralds several transformations and challenges ahead that CIOs and data and analytics leaders should embrace and include in their planning for successful strategies. Common themes across the discipline in general, and within particular business functions and industries, include:
Artificial intelligence (AI) is emerging as a core business and analytic competency. Beyond yesteryear’s hard-coded algorithms and manual data science activities, machine learning (ML) promises to transform business processes, reconfigure workforces, optimize infrastructure behavior and blend industries through rapidly improved decision making and process optimization.
Natural language is beginning to play a dual role in many organizations and applications as a source of input for analytic and other applications, and a variety of output, in addition to traditional analytic visualizations.
Information itself is being recognized as a corporate asset (albeit not yet a balance sheet asset), prompting organizations to become more disciplined about monetizing, managing and measuring it as they do with other assets. This includes “spending” it like cash, selling/licensing it to others, participating in emerging data marketplaces, applying asset management principles to improve its quality and availability, and quantifying its value and risks in a variety of ways.
Smart devices that both produce and consume Internet of Things (IoT) data will also move intelligent computing to the edge of business functions, enabling devices in almost every industry to operate and interact with humans and each other without a centralized command and control. The resulting opportunities for innovation are unbounded.
Trust becomes the watchword for businesses, devices and information, leading to the creation of digital ethics frameworks, accreditation and assessments. Most attempts at leveraging blockchain as a trust mechanism fail until technical limitations, particularly performance, are solved.
…
Education
Significant changes to the global education landscape have taken shape in 2016, and spotlight new and interesting trends for 2017 and beyond. “Predicts 2017: Education Gets Personal” is focused on several SPAs, each uniquely contributing to the foundation needed to create the digitalized education environments of the future. Organizations and institutions will require new strategies to leverage existing and new technologies to maximize benefits to the organization in fresh and innovative ways.
By 2021, more than 30% of institutions will be forced to execute on a personalization strategy to maintain student enrollment.
By 2021, the top 100 higher education institutions will have to adopt AI technologies to stay competitive in research.
…
Artificial Intelligence
Business and IT leaders are stepping up to a broad range of opportunities enabled by AI, including autonomous vehicles, smart vision systems, virtual customer assistants, smart (personal) agents and natural-language processing. Gartner believes that this new general-purpose technology is just beginning a 75-year technology cycle that will have far-reaching implications for every industry. In “Predicts 2017: Artificial Intelligence,” we reflect on the near-term opportunities, and the potential burdens and risks that organizations face in exploiting AI. AI is changing the way in which organizations innovate and communicate their processes, products and services.
Practical strategies for employing AI and choosing the right vendors are available to data and analytics leaders right now.
By 2019, more than 10% of IT hires in customer service will mostly write scripts for bot interactions.
Through 2020, organizations using cognitive ergonomics and system design in new AI projects will achieve long-term success four times more often than others.
By 2020, 20% of companies will dedicate workers to monitor and guide neural networks.
By 2019, startups will overtake Amazon, Google, IBM and Microsoft in driving the AI economy with disruptive business solutions.
By 2019, AI platform services will cannibalize revenues for 30% of market-leading companies.
“Predicts 2017: Drones”
By 2020, the top seven commercial drone manufacturers will all offer analytical software packages.
“Predicts 2017: The Reinvention of Buying Behavior in Vertical-Industry Markets”
By 2021, 30% of net new revenue growth from industry-specific solutions will include AI technology.
…
Advanced Analytics and Data Science
Advanced analytics and data science are fast becoming mainstream solutions and competencies in most organizations, even supplanting traditional BI and analytics resources and budgets. They allow more types of knowledge and insights to be extracted from data. To become and remain competitive, enterprises must seek to adopt advanced analytics, adapt their business models, establish specialist data science teams and rethink their overall strategies to keep pace with the competition. “Predicts 2017: Analytics Strategy and Technology” offers advice on overall strategy, approach and operational transformation to algorithmic business that leadership needs to build to reap the benefits.
By 2018, deep learning (deep neural networks [DNNs]) will be a standard component in 80% of data scientists’ tool boxes.
By 2020, more than 40% of data science tasks will be automated, resulting in increased productivity and broader usage by citizen data scientists.
By 2019, natural-language generation will be a standard feature of 90% of modern BI and analytics platforms.
By 2019, 50% of analytics queries will be generated using search, natural-language query or voice, or will be autogenerated.
By 2019, citizen data scientists will surpass data scientists in the amount of advanced analysis produced.
By 2020, 95% of video/image content will never be viewed by humans; instead, it will be vetted by machines that provide some degree of automated analysis.
Through 2020, lack of data science professionals will inhibit 75% of organizations from achieving the full potential of IoT.
From DSC: When you read the article below, notice how many times these CIOs mention that they’re tapping into streams of content.
How to stay current with emerging tech: CIO tips — from enterprisersproject.com by Carla Rudder
CIOs from Target, CVS Health, GE, and others share strategies for keeping up with the latest technologies
Excerpts:
I spend a fair amount of time looking at LinkedIn and Twitter. I’m particular about what I subscribe to. I see what people are interested in, so these social networks are good sources of information.
…
First, I set up Google alerts on topics that are of interest to me. I can skim these daily to keep abreast of what’s happening.
…
On the top-down side, I employ some different tactics. For example, I love using the Flipboard app to find relevant technology news stories targeted to my preferences. Also, I enjoy reading as much as I can about management and macro trends in technology and society.
…
First, pick some new media and follow it regularly. Examples that come to mind are Quartz, Vox, and Slate. Then, seek a balanced perspective from traditional media like The Wall Street Journal, The New York Times, The Atlantic, and The Economist.
…
When I can’t get out to conferences, I watch TED Talks. In fact, I watch a lot of talks that have nothing to do with IT, but they certainly help with leadership.
The most innovative AI breakthroughs, and the companies that promote them – such as DeepMind, Magic Pony, Ayasdi, Wolfram Alpha and Improbable – have their origins in universities. Now AI will transform universities.
We believe AI is a new scientific infrastructure for research and learning that universities will need to embrace and lead, otherwise they will become increasingly irrelevant and eventually redundant.
Through their own brilliant discoveries, universities have sown the seeds of their own disruption. How they respond to this AI revolution will profoundly reshape science, innovation, education – and society itself.
…
As AI gets more powerful, it will not only combine knowledge and data as instructed, but will search for combinations autonomously. It can also assist collaboration between universities and external parties, such as between medical research and clinical practice in the health sector.
The implications of AI for university research extend beyond science and technology.
…
When it comes to AI in teaching and learning, many of the more routine academic tasks (and least rewarding for lecturers), such as grading assignments, can be automated. Chatbots, intelligent agents using natural language, are being developed by universities such as the Technical University of Berlin; these will answer questions from students to help plan their course of studies.
Virtual assistants can tutor and guide more personalized learning. As part of its Open Learning Initiative (OLI), Carnegie Mellon University has been working on AI-based cognitive tutors for a number of years. It found that its OLI statistics course, run with minimal instructor contact, resulted in comparable learning outcomes for students with fewer hours of study. In one course at the Georgia Institute of Technology, students could not tell the difference between feedback from a human being and a bot.
I built an Amazon Alexa skill called Introduction to Computing Flashcards. Using the skill (or Amazon Alexa app), students listen to Alexa and then answer questions. Alexa helps students prepare for an exam by speaking definitions and then waiting for them to identify the term. In addition to quizzing the student, Alexa also keeps track of the correct answers. If a student answers five questions correctly, Alexa shares a game code, which is worth class experience points in the course’s gamification app, My Game.
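From DSC: For readers curious about the mechanics, below is a minimal sketch (in Python, as an AWS Lambda-style handler) of how quiz logic like this could work: Alexa speaks a definition, checks the student’s answer, keeps the score in session attributes, and shares a game code after five correct responses. The intent name, slot name, flashcard definitions, and game code are hypothetical placeholders, not the actual code behind the Introduction to Computing Flashcards skill; only the JSON response envelope follows the standard Alexa Skills Kit format.

```python
# Hypothetical sketch of an Alexa exam-preparation skill (AWS Lambda handler).
# The intent name "AnswerIntent", the slot "Term", the flashcards, and the
# game code are illustrative placeholders, not the author's published skill.
import random

FLASHCARDS = {
    "algorithm": "a step-by-step procedure for solving a problem",
    "variable": "a named storage location whose value can change",
    "loop": "a control structure that repeats a block of statements",
}
CORRECT_ANSWERS_FOR_CODE = 5  # correct answers needed before Alexa shares a game code


def build_response(text, attributes, end_session=False):
    """Wrap speech text in the Alexa Skills Kit JSON response envelope."""
    return {
        "version": "1.0",
        "sessionAttributes": attributes,
        "response": {
            "outputSpeech": {"type": "PlainText", "text": text},
            "shouldEndSession": end_session,
        },
    }


def ask_next_question(attributes):
    """Pick a random term, remember it in the session, and ask for it."""
    term = random.choice(list(FLASHCARDS))
    attributes["current_term"] = term
    return build_response(
        f"Here is a definition: {FLASHCARDS[term]}. What term am I describing?",
        attributes,
    )


def lambda_handler(event, context):
    request = event["request"]
    attributes = event.get("session", {}).get("attributes") or {"score": 0}

    if request["type"] == "LaunchRequest":
        return ask_next_question(attributes)

    if request["type"] == "IntentRequest" and request["intent"]["name"] == "AnswerIntent":
        guess = (request["intent"]["slots"]["Term"].get("value") or "").lower()
        if guess == attributes.get("current_term"):
            attributes["score"] += 1
            if attributes["score"] >= CORRECT_ANSWERS_FOR_CODE:
                # Placeholder game code; a real skill might look this up per student.
                return build_response(
                    "Correct! You have earned a game code: CS101-POINTS.",
                    attributes,
                    end_session=True,
                )
            prefix = "Correct! "
        else:
            prefix = f"Not quite, I was describing {attributes.get('current_term')}. "
        follow_up = ask_next_question(attributes)
        follow_up["response"]["outputSpeech"]["text"] = (
            prefix + follow_up["response"]["outputSpeech"]["text"]
        )
        return follow_up

    # Anything else (stop, cancel, unrecognized intents) ends the session politely.
    return build_response("Goodbye, and good luck on the exam.", attributes, end_session=True)
```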
…
Certainly, exam preparation apps are one way to use digital assistants in education. As development and publishing of Amazon Alexa skills becomes easier, faculty will be able to produce such skills just as easily as they now create PowerPoints. Given the basic code available through Amazon tutorials, it takes 20 minutes to create a new exam preparation app. Basic voice experience Amazon Alexa skills can take as much as five minutes to complete.
Universities can publish their campus news through the Alexa Flash Briefing. This type of a skill can publish news, success stories, and other events associated with the campus.
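From DSC: A Flash Briefing skill is essentially a pointer to a news feed the university hosts. As I understand Amazon’s documented feed format, a minimal JSON feed looks something like the sketch below; the campus stories and the URL are invented examples.

```python
# Hypothetical sketch of a campus-news feed for an Alexa Flash Briefing skill.
# Field names (uid, updateDate, titleText, mainText, redirectionUrl) follow
# Amazon's documented Flash Briefing feed format; the stories and the URL
# are made-up examples.
import json
import uuid
from datetime import datetime, timezone

CAMPUS_STORIES = [
    ("Robotics team wins regional competition",
     "The campus robotics team took first place at this weekend's regional event."),
    ("New VR lab opens in the library",
     "Students can now reserve the virtual reality lab for class projects."),
]


def feed_item(title, body):
    """Build one Flash Briefing item; Alexa reads mainText aloud."""
    return {
        "uid": f"urn:uuid:{uuid.uuid4()}",
        "updateDate": datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S.0Z"),
        "titleText": title,
        "mainText": body,
        "redirectionUrl": "https://www.example.edu/news",  # placeholder campus news page
    }


if __name__ == "__main__":
    # Host this JSON at an HTTPS URL and point the skill's configuration at it.
    print(json.dumps([feed_item(t, b) for t, b in CAMPUS_STORIES], indent=2))
```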
…
If you are a faculty member, how can you develop your first Amazon Alexa skill? You can use any of the tutorials already available. You can also participate in an Amazon Alexa classroom training provided by Alexa Dev Days. It is possible that schools or maker spaces near you offer in-person developer sessions. You can use meetup.com to track these opportunities.
From DSC: Given the increasing use of robotics, automation, and artificial intelligence…how should the question of “What sort of education will you need to be employable in the future?” impact what’s being taught within K-12 & within higher education? Should certain areas within higher education, for example, start owning this research, as well as the strategic planning around whether changes are needed to the core curricula in response to this increasingly important trend?
The future’s coming at us fast — perhaps faster than we think. It seems prudent to work through some potential scenarios and develop plans for those various scenarios now, rather than react to this trend at some point in the future. If we wait, we’ll be trying to “swim up the backside of the wave” as my wise and wonderful father-in-law would say.
The above reflections occurred after I reviewed the posting out at cmrubinworld.com (with thanks to @STEMbyThomas for this resource):
The Global Search for Education: What Does My Robot Think?
Excerpt: The Global Search for Education is pleased to welcome Ling Lee, Co-Curator of Robots and the Contemporary Science Manager for Exhibitions at the Science Museum in London, to discuss the impact of robots on our past and future.
The Enterprise Gets Smart
Companies are starting to leverage artificial intelligence and machine learning technologies to bolster customer experience, improve security and optimize operations.
Excerpt:
Assembling the right talent is another critical component of an AI initiative. While existing enterprise software platforms that add AI capabilities will make the technology accessible to mainstream business users, there will be a need to ramp up expertise in areas like data science, analytics and even nontraditional IT competencies, says Guarini.
“As we start to see the land grab for talent, there are some real gaps in emerging roles, and those that haven’t been as critical in the past,” Guarini says, citing the need for people with expertise in disciplines like philosophy and linguistics, for example. “CIOs need to get in front of what they need in terms of capabilities and, in some cases, identify potential partners.”
Artificial intelligence has already provided beneficial tools that are used every day by people around the world. Its continued development, guided by the following principles, will offer amazing opportunities to help and empower people in the decades and centuries ahead.
Research Issues
1) Research Goal: The goal of AI research should be to create not undirected intelligence, but beneficial intelligence.
2) Research Funding: Investments in AI should be accompanied by funding for research on ensuring its beneficial use, including thorny questions in computer science, economics, law, ethics, and social studies, such as:
How can we make future AI systems highly robust, so that they do what we want without malfunctioning or getting hacked?
How can we grow our prosperity through automation while maintaining people’s resources and purpose?
How can we update our legal systems to be more fair and efficient, to keep pace with AI, and to manage the risks associated with AI?
What set of values should AI be aligned with, and what legal and ethical status should it have?
3) Science-Policy Link: There should be constructive and healthy exchange between AI researchers and policy-makers.
4) Research Culture: A culture of cooperation, trust, and transparency should be fostered among researchers and developers of AI.
5) Race Avoidance: Teams developing AI systems should actively cooperate to avoid corner-cutting on safety standards.
Ethics and Values
6) Safety: AI systems should be safe and secure throughout their operational lifetime, and verifiably so where applicable and feasible.
7) Failure Transparency: If an AI system causes harm, it should be possible to ascertain why.
8) Judicial Transparency: Any involvement by an autonomous system in judicial decision-making should provide a satisfactory explanation auditable by a competent human authority.
9) Responsibility: Designers and builders of advanced AI systems are stakeholders in the moral implications of their use, misuse, and actions, with a responsibility and opportunity to shape those implications.
10) Value Alignment: Highly autonomous AI systems should be designed so that their goals and behaviors can be assured to align with human values throughout their operation.
11) Human Values: AI systems should be designed and operated so as to be compatible with ideals of human dignity, rights, freedoms, and cultural diversity.
12) Personal Privacy: People should have the right to access, manage and control the data they generate, given AI systems’ power to analyze and utilize that data.
13) Liberty and Privacy: The application of AI to personal data must not unreasonably curtail people’s real or perceived liberty.
14) Shared Benefit: AI technologies should benefit and empower as many people as possible.
15) Shared Prosperity: The economic prosperity created by AI should be shared broadly, to benefit all of humanity.
16) Human Control: Humans should choose how and whether to delegate decisions to AI systems, to accomplish human-chosen objectives.
17) Non-subversion: The power conferred by control of highly advanced AI systems should respect and improve, rather than subvert, the social and civic processes on which the health of society depends.
18) AI Arms Race: An arms race in lethal autonomous weapons should be avoided.
Longer-term Issues
19) Capability Caution: There being no consensus, we should avoid strong assumptions regarding upper limits on future AI capabilities.
20) Importance: Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.
21) Risks: Risks posed by AI systems, especially catastrophic or existential risks, must be subject to planning and mitigation efforts commensurate with their expected impact.
22) Recursive Self-Improvement: AI systems designed to recursively self-improve or self-replicate in a manner that could lead to rapidly increasing quality or quantity must be subject to strict safety and control measures.
23) Common Good: Superintelligence should only be developed in the service of widely shared ethical ideals, and for the benefit of all humanity rather than one state or organization.
Excerpts:
Creating human-level AI: Will it happen, and if so, when and how? What key remaining obstacles can be identified? How can we make future AI systems more robust than today’s, so that they do what we want without crashing, malfunctioning or getting hacked?
Panel with Anca Dragan (Berkeley), Demis Hassabis (DeepMind), Guru Banavar (IBM), Oren Etzioni (Allen Institute), Tom Gruber (Apple), Jürgen Schmidhuber (Swiss AI Lab), Yann LeCun (Facebook/NYU), Yoshua Bengio (Montreal) (video)
Superintelligence: Science or fiction? If human level general AI is developed, then what are likely outcomes? What can we do now to maximize the probability of a positive outcome? (video)
Panel with Bart Selman (Cornell), David Chalmers (NYU), Elon Musk (Tesla, SpaceX), Jaan Tallinn (CSER/FLI), Nick Bostrom (FHI), Ray Kurzweil (Google), Stuart Russell (Berkeley), Sam Harris, Demis Hassabis (DeepMind): If we succeed in building human-level AGI, then what are likely outcomes? What would we like to happen?
Panel with Dario Amodei (OpenAI), Nate Soares (MIRI), Shane Legg (DeepMind), Richard Mallah (FLI), Stefano Ermon (Stanford), Viktoriya Krakovna (DeepMind/FLI): Technical research agenda: What can we do now to maximize the chances of a good outcome? (video)
Law, policy & ethics: How can we update legal systems, international treaties and algorithms to be more fair, ethical and efficient and to keep pace with AI?
“Every child is a genius in his or her own way. VR can be the key to awakening the genius inside.”
This is the closing line of a new research study currently making its way out of China. Conducted by Beijing Bluefocus E-Commerce Co., Ltd and Beijing iBokan Wisdom Mobile Internet Technology Training Institution, the study takes a detailed look at the different ways virtual reality can make public education more effective.
“Compared with traditional education, VR-based education is of obvious advantage in theoretical knowledge teaching as well as practical skills training. In theoretical knowledge teaching, it boasts the ability to make abstract problems concrete, and theoretical thinking well-supported. In practical skills training, it helps sharpen students’ operational skills, provides an immersive learning experience, and enhances students’ sense of involvement in class, making learning more fun, more secure, and more active,” the study states.
CALIFORNIA — Acer Starbreeze, Google, HTC VIVE, Facebook’s Oculus, Samsung, and Sony Interactive Entertainment [on 12/7/16] announced the creation of a non-profit organization of international headset manufacturers to promote the growth of the global virtual reality (VR) industry. The Global Virtual Reality Association (GVRA) will develop and share best practices for industry and foster dialogue between public and private stakeholders around the world.
The goal of the Global Virtual Reality Association is to promote responsible development and adoption of VR globally. The association’s members will develop and share best practices, conduct research, and bring the international VR community together as the technology progresses. The group will also serve as a resource for consumers, policymakers, and industry interested in VR.
VR has the potential to be the next great computing platform, improving sectors ranging from education to healthcare, and contribute significantly to the global economy. Through research, international engagement, and the development of best practices, the founding companies of the Global Virtual Reality Association will work to unlock and maximize VR’s potential and ensure those gains are shared as broadly around the world as possible.
Occipital announced today that it is launching a mixed reality platform built upon its depth-sensing technologies called Bridge. The headset is available for $399 and starts shipping in March; eager developers can get their hands on an Explorer Edition for $499, which starts shipping next week.
From DSC: While I hope that early innovators in the AR/VR/MR space thrive, I do wonder what will happen if and when Apple puts out its own rendition of a new form (or forms) of Human Computer Interaction — such as integrating AR capabilities directly into its next iPhone.
Enterprise augmented reality applications ready for prime time — from internetofthingsagenda.techtarget.com by Beth Stackpole
Pokémon Go may have put AR on the map, but the technology is now being leveraged for enterprise applications in areas like marketing, maintenance and field service.
Excerpt:
Unlike virtual reality, which creates an immersive, computer-generated environment, the less familiar augmented reality, or AR, technology superimposes computer-generated images and overlays information on a user’s real-world view. This computer-generated sensory data — which could include elements such as sound, graphics, GPS data, video or 3D models — bridges the digital and physical worlds. For an enterprise, the applications are boundless, arming workers walking the warehouse or selling on the shop floor, for example, with essential information that can improve productivity, streamline customer interactions and deliver optimized maintenance in the field.
2016 is fast drawing to a close. And while many will be glad to see the back of it, for those of us who work and play with Virtual Reality, it has been a most exciting year.
By the time the bells ring out signalling the start of a new year, the total number of VR users will exceed 43 million. This is a market on the move, projected to be worth $30bn by 2020. If it’s to meet that valuation, then we believe 2017 will be an incredibly important year in the lifecycle of VR hardware and software development.
VR will be enjoyed by an increasingly mainstream audience very soon, and here we take a quick look at some of the trends we expect to develop over the next 12 months for that to happen.
In an Australian first, education students will be able to hone their skills without setting foot in a classroom. Murdoch University has hosted a pilot trial of TeachLivE, a virtual reality environment for teachers in training.
The student avatars are able to disrupt the class in a range of ways that teachers may encounter such as pulling out mobile phones or losing their pen during class.
8 Cutting Edge Virtual Reality Job Opportunities — from appreal-vr.com by Yariv Levski
Today we’re highlighting the top 8 job opportunities in VR to give you a current scope of the Virtual Reality job market.
The Epson Moverio BT-300, to give the smart glasses their full name, are wearable technology – lightweight, comfortable see-through glasses – that allow you to see digital data, and have a first person view (FPV) experience: all while seeing the real world at the same time. The applications are almost endless.
Volkswagen’s pivot away from diesel cars to electric vehicles is still a work in progress, but some details about its coming I.D. electric car — unveiled in Paris earlier this year — are starting to come to light. Much of the news is about an innovative augmented reality heads-up display Volkswagen plans to offer in its electric vehicles. Klaus Bischoff, head of the VW brand, says the I.D. electric car will completely reinvent vehicle instrumentation systems when it is launched at the end of the decade.
For decades, numerous research centers and academics around the world have been working on the potential of virtual reality technology. Countless research projects undertaken in these centers are an important indicator that everything from health care to real estate can experience disruption in a few years.
…
Virtual Human Interaction Lab — Stanford University
Virtual Reality Applications Center — Iowa State University
Institute for Creative Technologies — USC
Medical Virtual Reality — USC
The Imaging Media Research Center — Korea Institute of Science and Technology
Virtual Reality & Immersive Visualization Group — RWTH Aachen University
Center For Simulations & Virtual Environments Research — UCIT
Duke immersive Virtual Environment — Duke University
Experimental Virtual Environments (EVENT) Lab for Neuroscience and Technology — Barcelona University
Immersive Media Technology Experiences (IMTE) — Norwegian University of Science and Technology
Human Interface Technology Laboratory — University of Washington
Augmented Reality (AR) dwelled quietly in the shadow of VR until earlier this year, when a certain app propelled it into the mainstream. Now, AR is a household term and can hold its own with advanced virtual technologies. The AR industry is predicted to hit global revenues of $90 billion by 2020, not just matching VR but overtaking it by a large margin. Of course, a lot of this turnover will be generated by applications in the entertainment industry. VR was primarily created by gamers for gamers, but AR began as a visionary idea that would change the way that humanity interacted with the world around them. The first applications of augmented reality were actually geared towards improving human performance in the workplace… But there’s far, far more to be explored.
I stood at the peak of Mount Rainier, the tallest mountain in Washington state. The sounds of wind whipped past my ears, and mountains and valleys filled a seemingly endless horizon in every direction. I’d never seen anything like it—until I grabbed the sun.
Using my HTC Vive virtual reality wand, I reached into the heavens in order to spin the Earth along its normal rotational axis, until I set the horizon on fire with a sunset. I breathed deeply at the sight, then spun our planet just a little more, until I filled the sky with a heaping helping of the Milky Way Galaxy.
Virtual reality has exposed me to some pretty incredible experiences, but I’ve grown ever so jaded in the past few years of testing consumer-grade headsets. Google Earth VR, however, has dropped my jaw anew. This, more than any other game or app for SteamVR’s “room scale” system, makes me want to call every friend and loved one I know and tell them to come over, put on a headset, and warp anywhere on Earth that they please.
In VR architecture, the difference between real and unreal is fluid and, to a large extent, unimportant. What is important, and potentially revolutionary, is VR’s ability to draw designers and their clients into a visceral world of dimension, scale, and feeling, removing the unfortunate schism between a built environment that exists in three dimensions and a visualization of it that has until now existed in two.
Many of the VR projects in architecture are focused on the final stages of the design process, basically selling a house to a client. Thomas sees the real potential in the early stages, when the main decisions need to be made. VR is so good for this because it helps non-professionals understand and grasp the concepts of architecture very intuitively. And this is mostly what we talked about.
A proposed benefit of virtual reality is that it could one day eliminate the need to move our fleshy bodies around the world for business meetings and work engagements. Instead, we’ll be meeting up with colleagues and associates in virtual spaces. While this would be great news for the environment and business people sick of airports, it would be troubling news for airlines.
Imagine during one of your future trials that jurors in your courtroom are provided with virtual reality headsets, which allow them to view the accident site or crime scene digitally and walk around or be guided through a 3D world to examine vital details of the scene.
How can such an evidentiary presentation be accomplished? A system is being developed whereby investigators use a robot system, inspired by NASA’s Curiosity Mars rover, equipped with 3D imaging and panoramic videography equipment to record virtual reality video of the scene. The captured 360° immersive video and photographs of the scene would allow recreation of a VR experience with video and pictures of the original scene from every angle. Admissibility of this evidence would require a showing that the VR simulation fairly and accurately depicts what it represents. If a judge permits presentation of the evidence after its accuracy is established, jurors receiving the evidence could turn their heads and view various aspects of the scene by looking up, down, and around, and zooming in and out.
Unlike an animation or edited video initially created to demonstrate one party’s point of view, the purpose of this type of evidence would be to gather data and objectively preserve the scene without staging or tampering. Even further, this approach would allow investigators to revisit scenes as they existed during the initial forensic examination and give jurors a vivid rendition of the site as it existed when the events occurred.
The theme running throughout most of this year’s WinHEC keynote in Shenzhen, China was mixed reality. Microsoft’s Alex Kipman continues to be a great spokesperson and evangelist for the new medium, and it is apparent that Microsoft is going in deep, if not all in, on this version of the future. I, for one, as a mixed reality or bust developer, am very glad to see it.
As part of the presentation, Microsoft presented a video (see below) that shows the various forms of mixed reality. The video starts with a few virtual objects in the room with a person, transitions into the same room with a virtual person, then becomes a full virtual reality experience with Windows Holographic.
Robots have been a major focus in the technology world for decades and decades, but they and basic science, and for that matter everyday life, have largely been non-overlapping magisteria. That’s changed over the last few years, as robotics and every other field have come to inform and improve each other, and robots have begun to infiltrate and affect our lives in countless ways. So the only surprise in the news that the prestigious journal group Science has established a discrete Robotics imprint is that they didn’t do it earlier.
Editor Guang-Zhong Yang and president of the National Academy of Sciences Marcia McNutt introduce the journal:
In a mere 50 years, robots have gone from being a topic of science fiction to becoming an integral part of modern society. They now are ubiquitous on factory floors, build complex deep-sea installations, explore icy worlds beyond the reach of humans, and assist in precision surgeries… With this growth, the research community that is engaged in robotics has expanded globally. To help meet the need to communicate discoveries across all domains of robotics research, we are proud to announce that Science Robotics is open for submissions.
Today brought the inaugural issue of Science Robotics, Vol.1 Issue 1, and it’s a whopper. Despite having only a handful of articles, each is deeply interesting and shows off a different aspect of the robotics research world — though by no means do these few articles hit all the major regions of the field.
Science Robotics has been launched to cover the most important advances in the development and application of robots, with interest in hardware and software as well as social interactions and implications.
From molecular machines to large-scale systems, from outer space to deep-sea exploration, robots have become ubiquitous, and their impact on our lives and society is growing at an accelerating pace. Science Robotics has been launched to cover the most important advances in robot design, theory, and applications. Science Robotics promotes the communication of new ideas, general principles, and original developments. Its content will reflect broad and important new applications of robots (e.g., medical, industrial, land, sea, air, space, and service) across all scales (nano to macro), including the underlying principles of robotic systems covering actuation, sensor, learning, control, and navigation. In addition to original research articles, the journal also publishes invited reviews. There are also plans to cover opinions and comments on current policy, ethical, and social issues that affect the robotics community, as well as to engage with robotics educational programs by using Science Robotics content. The goal of Science Robotics is to move the field forward and cross-fertilize different research applications and domains.
Established to study and formulate best practices on AI technologies, to advance the public’s understanding of AI, and to serve as an open platform for discussion and engagement about AI and its influences on people and society.
GOALS
Support Best Practices
To support research and recommend best practices in areas including ethics, fairness, and inclusivity; transparency and interoperability; privacy; collaboration between people and AI systems; and the trustworthiness, reliability, and robustness of the technology.
Create an Open Platform for Discussion and Engagement
To provide a regular, structured platform for AI researchers and key stakeholders to communicate directly and openly with each other about relevant issues.
Advance Understanding
To advance public understanding and awareness of AI and its potential benefits and potential costs; to act as a trusted and expert point of contact as questions/concerns arise from the public and others in the area of AI; and to regularly update key constituents on the current state of AI progress.
From DSC:
The articles listed in this PDF document demonstrate the exponential pace of technological change that many nations across the globe are currently experiencing and will likely be experiencing for the foreseeable future. As we are no longer on a linear trajectory, we need to consider what this new trajectory means for how we:
Educate and prepare our youth in K-12
Educate and prepare our young men and women studying within higher education
One thought that comes to mind…when we’re moving this fast, we need to be looking upwards and outwards into the horizons — constantly pulse-checking the landscapes. We can’t be looking down or be so buried in our current positions/tasks that we aren’t noticing the changes that are happening around us.
But that lack of training is not unusual; it’s the norm. Despite the increased emphasis in recent years on improving professors’ teaching skills, such training often focuses on incorporating technology or flipping the classroom, rather than on how to give a traditional college lecture. It’s also in part why the lecture—a mainstay of any introductory undergraduate course—is endangered.
…
But is it the college lecture itself that’s the problem—or the lecturer?
Concerns about the lecture derive from anecdotal impressions as well as research data, including one meta-analysis of 225 studies looking at the effectiveness of traditional lectures versus active learning in undergraduate STEM courses. That analysis indicated that lecturing increased failure rates by 55 percent; active learning—meaning teaching methods that are more interactive than traditional lectures—resulted in better grades and a 36 percent drop in class failure rates. High grades and low failure rates were most pronounced in small classes that relied on active teaching, supporting the theory that more students might receive STEM degrees if active learning took the place of traditional lecturing.
Many people think riveting lecturers are naturally gifted, but public-speaking skills can be, and are, taught. The art of rhetoric was practiced and taught for millennia, beginning in ancient Greece over 2,000 years ago; oratory skills were a social asset in antiquity, a way to persuade, influence, and participate in civic life.
Today’s The Atlantic contains an article entitled “Should Colleges Really Eliminate the College Lecture?” that has really inspired me to write, in a way that the pending deadline on my book has not. Ordinarily I just ignore pieces like this except for maybe a tweet or two about them. But this time, I feel like this article has so many factually incorrect claims, glosses over so much research, and has such potential to spread bad ideas to a very wide audience that I felt the need to address its points one at a time. This is Part 1 of that response.
…
The article opens with a lament that, actually, I agree with completely: New Ph.D.’s do often lack the training in pedagogy that they need to be successful in their work. This training should include all forms of pedagogy, including lecture, and it should expose new instructors to the full range of pedagogies that are out there, as well as the research that informs their effectiveness (the concept of “evidence”: hold on to this idea) and the skill of selecting a combination of teaching methods that best suits the learning environment they are tasked with creating. Many universities are wising up to this need for training, but more need to get on board.
However from here, things start to go downhill…
…
And here, we find the lede that was buried by the headline: The whole problem with lecture is that we’re not well-trained enough in how to give great lectures. Training, insofar as it occurs at all, is focused on all these “modern” pedagogies and on technology. If we devoted as much training time to lecture as we did to the other stuff, then we’d see better results with lecturing. That is the claim as I understand it. It makes sense; but it’s wrong, and I’ll be explaining why as we go.
But this time, with this article, I felt that I needed to respond — because of how thoroughly wrong it is on basic and easily-checked facts, because I’m tired of my colleagues in higher ed making teaching decisions based on their own interests rather than students’, possibly because it’s getting near the end of the summer and I’m getting punchy. Whatever the reasons, here was Part 1 of the response in which we found (by actually checking the articles to which the original linked) that many of the claims about “eliminating lecture” in the first 1/4 of the article were flat-out wrong.
…
This is yet another instance of one of the worst things about this Atlantic article: The stubborn insistence that teaching in any way other than pure lecture is the same thing as “eliminating lecture”.
But keep this in mind: The discussion about active learning and lecture is not about what’s “new” or “traditional”, “modern” or “outdated”. It is, or at least ought to be, about what works best for student learning.
Here we have a meta-analysis of 225 existing studies that cuts across a wide spectrum of institutional types, student demographics, and instructional styles and shows a profound impact by active learning techniques on student learning and achievement.
I’m not sure what your reaction will be when you read that PNAS study [here]. But I will go out on a limb and say that any college or university professor who gives half of a damn about the well-being of his or her students will read that study, and then stop and at least think for a moment about whether his or her teaching in the classroom is part of the problem or part of the solution.
Our students need a learning environment that is supported by an instructor but which does not depend on the instructor bringing his or her “A” game to every class meeting. This is what active learning provides. It is what lecturing most definitely does not provide, and “more training” won’t change this.
Today, Microsoft unveiled new features for Word—Researcher and Editor—designed to make the application even smarter and easier to use.
When writing a research paper, a tech news article—or a political speech—proper citation is important. Everyone knows and expects that a writer has scoured other resources and has pulled thoughts and concepts from them. Having your work referenced and emulated is flattering, but nobody likes plagiarism. It’s important to give credit where credit is due by citing the original works used as reference, and Microsoft is making that simpler and more automated with the new Researcher feature in Word.
A blog post from Microsoft describes the new feature, “Researcher is a new service in Word that helps you find and incorporate reliable sources and content for your paper in fewer steps. Right within your Word document you can explore material related to your topic and add it—and its properly-formatted citation—in one click. Researcher uses the Bing Knowledge Graph to pull in the appropriate content from the web and provide structured, safe and credible information.”
The burgeoning field of Virtual Reality — or VR as it is commonly known — is a vehicle for telling stories through 360-degree visuals and sound that put you right in the middle of the action, be it at a crowded Syrian refugee camp, or inside the body of an 85-year-old with a bad hip and cataracts. Because of VR’s immersive properties, some people describe the medium as “the ultimate empathy machine.” But can it make people care about something as fraught and multi-faceted as homelessness?
A study in progress at Stanford’s Virtual Human Interaction Lab explores that question, and I strapped on an Oculus Rift headset (one of the most popular devices people currently use to experience VR) to look for an answer.
A new way of understanding homelessness
The study, called Empathy at Scale, puts participants in a variety of scenes designed to help them imagine the experience of being homeless themselves.
When designing a program or product, many education leaders and ed-tech developers want to start with the best knowledge available on how students learn. Unfortunately, this is easier said than done.
Although thousands of academic articles are published every year, busy education leaders and product developers often don’t know where to start, or don’t have time to sift through and find studies that are relevant to their work. As pressure mounts for “evidence-based” practices and “research-based” products, many in the education community are frustrated, and want an easier way to find information that will help them deliver stronger programs and products — and results. We need better tools to help make research more accessible for everyday work in education.
The Digital Promise Research Map meets this need by connecting education leaders and product developers with research from thousands of articles in education and the learning sciences, along with easy-to-understand summaries on some of the most relevant findings in key research topics.
What does it mean to be a UX designer? Whether you land a job at a startup or a larger corporation, your role as UX designer will be directly involved in the process to make a product useful, usable and delightful for that company’s intended target user group. Whether you are managing a large team of UXers or flying solo, the UX process itself remains the same and in general works in this order:
User Research
User research involves speaking to real users in your target audience about your product. If the product doesn’t exist yet, it’s about speaking to users of similar products and finding out what they want from this kind of platform. If it’s a pre-existing product, you’ll be asking questions about how they feel navigating your current design, their success in reaching their goals, and if they find the information they’re looking for easily and intuitively. A number of methods are usually adopted for this part of the process, including: questionnaires, focus group discussion, task analysis, online surveys, persona creation and user journey map.
Design
During the design phase you’ll be primarily thinking about how your product/service can accommodate how the customer already behaves (as seen during User Research). The design of your product revolves around functionality and usability, rather than colors or pictures (these are established later by a visual designer). Having established during your user research what your users expect from your product or site, what their goals are and how they like to operate a system, it is functionality and usability that will be your focus now. During this phase you will be using the following techniques to design your user’s journey through the site: information architecture, wireframing, prototyping.
Testing
Testing allows you to check that the changes you made during the design phase (if redesigning an existing product) stand up to scrutiny. It’s a great way to eliminate problems or user difficulties that were unforeseen in the design phase before getting started on the implementation phase. Testing methods include: usability testing, remote user testing, a/b testing. (Bear in mind that testing can be repeated at any stage in the process, and often is to increase the quality of the design and fix any errors.)
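From DSC: As one concrete example of the testing step, here is a minimal sketch of how an A/B test might be analyzed once two versions of a design have been shown to users. The sample numbers are invented; the two-proportion z-test is one standard way to check whether version B’s higher task-completion rate is likely to be a real improvement rather than noise.

```python
# Hypothetical sketch of analyzing an A/B test on task-completion rates.
# The counts below are invented for illustration.
from statistics import NormalDist


def two_proportion_z_test(success_a, total_a, success_b, total_b):
    """Return (z statistic, two-sided p-value) comparing rate B against rate A."""
    p_a = success_a / total_a
    p_b = success_b / total_b
    pooled = (success_a + success_b) / (total_a + total_b)
    standard_error = (pooled * (1 - pooled) * (1 / total_a + 1 / total_b)) ** 0.5
    z = (p_b - p_a) / standard_error
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value


if __name__ == "__main__":
    # Example: 118 of 200 users completed the task with design A,
    # 144 of 200 with the redesigned flow B.
    z, p = two_proportion_z_test(118, 200, 144, 200)
    print(f"z = {z:.2f}, p = {p:.4f}")  # a small p-value (e.g. < 0.05) favors keeping B
```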
Implementation
If you’ve not had much experience working with web developers, then it’s important to consider this crucial aspect of the role. During implementation you will be working intimately with developers to reach your end goal for a project. The developers will be working to transform your design ideas into a real, working website; how you approach this relationship will determine the success or failure of your project. Keeping your developers in the loop throughout the process will make this final phase easier for everyone involved; you as the UX designer will have realistic expectations of what the developers can produce (and in what time-frame) and the developers won’t get any nasty shocks at the last minute.
Officially, a UX designer is responsible for this entire process, and its execution. However, larger companies tend to break this role down into a few, smaller roles that focus entirely on one section. We will look at what these roles are in the next section.
What other roles fall under the ‘UX Design’ umbrella?…<read more here>
From DSC: A UX Designer, ideally, would be one of the people around the table in higher education who is helping to create excellent learning experiences. How many organizations are using one? Probably not many. Instead, such duties are most likely being lumped into the role of the Instructional Designer or the Instructional Technologist — or they become yet another hat that the faculty member is supposed to wear.
We first launched support for 360-degree videos back in March 2015. From musicians to athletes to brands, creators have done some incredible things with this technology. Now, they’ll be able to do even more to bring fans directly into their world, with 360-degree live streaming. And after years of live streaming Coachella for fans around the world who can’t attend the festival, this year we’re bringing you the festival like never before by live streaming select artist performances in 360 degrees this weekend. Starting today, we’re also launching spatial audio for on-demand YouTube videos. Just as watching a concert in 360 degrees can give you an unmatched immersive experience, spatial audio allows you to listen along as you do in real life, where depth, distance and intensity all play a role. Try out this playlist on your Android device.
CWRU was among the first in higher education to begin working with HoloLens, back in 2014. They’ve since discovered new ways the tech could help transform education. One of their current focuses is changing how students experience medical-science courses.
“This is a curriculum that hasn’t drastically changed in more than 100 years, because there simply hasn’t been another way,” says Mark Griswold, the faculty director for HoloLens at CWRU. “The mixed-reality of the HoloLens has the potential to revolutionize this education by bringing 3D content into the real world.”
“Imagine a physics class where you’re able to show how friction works. Imagine being able to experience gravity on Mars — by moving around virtually,” he says. “VR can make science, technology and art come alive.”
VR will soon become an open canvas for educators to create learning experiences. Eventually, fitting VR into the curriculum will be limited only by an instructor’s imagination and budget, says Christopher Sessums, the program director of research and evaluation at Johns Hopkins School of Education.
Burleson and co-author Armanda Lewis imagine such technology in a year 2041 Holodeck, which Burleson’s NYU-X Lab is currently developing in prototype form, in collaboration with colleagues at NYU Courant, Tandon, Steinhardt, and Tisch.
“The ‘Holodeck’ will support a broad range of transdisciplinary collaborations, integrated education, research, and innovation by providing a networked software/hardware infrastructure that can synthesize visual, audio, physical, social, and societal components,” said Burleson.
It’s intended as a model for the future of cyberlearning experience, integrating visual, audio, and physical (haptics, objects, real-time fabrication) components, with shared computation, integrated distributed data, immersive visualization, and social interaction to make possible large-scale synthesis of learning, research, and innovation.
…British television presenter Diane-Louise Jordan will guide students on a tour through Shakespeare’s hometown of Stratford-upon-Avon, including his childhood home and school; and the bard’s view of London, including the famous Globe Theatre where his plays were performed. (Shakespeare actually died April 23, which this year falls on a Saturday.)
Also see:
You can register to see the recording on that page as well.
Film Students To Compete in Virtual Reality Production Contest — from campustechnology.com by Michael Hart
One of the first ever competitions involving virtual reality production will challenge college film students to create their own 360-degree films.
HBO and Discovery Communications announced today that they are partnering with 3D-graphics startup OTOY — both companies taking equity stakes. The partnership marks an effort by the two networks to evolve entertainment experiences beyond two dimensional television. Virtual reality, augmented reality, and even holograms were all highlighted as areas OTOY would help its traditional media partners explore.
TV knows it must push toward virtual and augmented reality
Apple was granted a patent today for a type of live interactive augmented reality (AR) video to be used in future iOS devices, indicating the company may soon enter the AR/VR game. The patent does not appear to be directly related to an AR/VR headset, but is certainly a step in that direction.
The patent describes Apple’s planned augmented reality technology as layered, live AR video that users can interact with via touchscreen. In the live video, objects can be identified and an information layer can be generated for them.
“In some implementations,” the patent text notes, “the information layer can include annotations made by a user through the touch sensitive surface.”
Virtual & Augmented Reality: Blooloop’s Guide to VR and AR — from blooloop.com
Visitor attractions are racing to embrace Virtual and Augmented Reality technologies. But what are the potential opportunities and possible pitfalls of VR and AR?