Google has recently released a brand new version of Google Earth for both Chrome and Android. This new version comes with a slew of nifty features teachers can use with students in class. Following is a quick overview of the most fascinating features…
From DSC: This short article presents a balanced approach, as it relays both the advantages and disadvantages of AI in our world.
Perhaps one of higher education’s new tasks will be to determine which jobs are likely to survive the next 5-10+ years and to help people get up to speed in those areas. The liberal arts are very important here, as they lay a solid foundation that one can use to adapt to changing conditions and move into multiple areas. If the C-suite only sees the savings to the bottom line — and to *&^# with humanity (that’s their problem, not mine!) — then our society could be in trouble.
If you work in education, you’ll know there’s a HUGE array of applications, services, products and tools created to serve a multitude of functions in education.
Tools for teaching and learning, parent-teacher communication apps, lesson planning software, home-tutoring websites, revision blogs, SEN education information, professional development qualifications and more.
There are so many companies creating new products for education, though, that it can be difficult to keep up – especially with the massive volumes of planning and marking teachers have to do, never mind finding the time to actually teach!
So how do you know which ones are the best?
Well, as a team of people passionate about education and learning, we decided to do a bit of research to help you out.
We’ve asked some of the best and brightest in education for their opinions on the hottest EdTech of 2017. These guys are the real deal – experts in education, teaching and new tech from all over the world, from England and India to New York and San Francisco.
They’ve given us a list of 82 amazing, tried and tested tools…
From DSC: The ones I had mentioned that Giorgio included in his excellent article were:
AdmitHub – Free, Expert College Admissions Advice
Labster – Empowering the Next Generation of Scientists to Change the World
The Enterprise Gets Smart
Companies are starting to leverage artificial intelligence and machine learning technologies to bolster customer experience, improve security and optimize operations.
Assembling the right talent is another critical component of an AI initiative. While existing enterprise software platforms that add AI capabilities will make the technology accessible to mainstream business users, there will be a need to ramp up expertise in areas like data science, analytics and even nontraditional IT competencies, says Guarini.
“As we start to see the land grab for talent, there are some real gaps in emerging roles, and those that haven’t been as critical in the past,” Guarini says, citing the need for people with expertise in disciplines like philosophy and linguistics, for example. “CIOs need to get in front of what they need in terms of capabilities and, in some cases, identify potential partners.”
Artificial intelligence has already provided beneficial tools that are used every day by people around the world. Its continued development, guided by the following principles, will offer amazing opportunities to help and empower people in the decades and centuries ahead.
1) Research Goal: The goal of AI research should be to create not undirected intelligence, but beneficial intelligence.
2) Research Funding: Investments in AI should be accompanied by funding for research on ensuring its beneficial use, including thorny questions in computer science, economics, law, ethics, and social studies, such as:
How can we make future AI systems highly robust, so that they do what we want without malfunctioning or getting hacked?
How can we grow our prosperity through automation while maintaining people’s resources and purpose?
How can we update our legal systems to be more fair and efficient, to keep pace with AI, and to manage the risks associated with AI?
What set of values should AI be aligned with, and what legal and ethical status should it have?
3) Science-Policy Link: There should be constructive and healthy exchange between AI researchers and policy-makers.
4) Research Culture: A culture of cooperation, trust, and transparency should be fostered among researchers and developers of AI.
5) Race Avoidance: Teams developing AI systems should actively cooperate to avoid corner-cutting on safety standards.
Ethics and Values
6) Safety: AI systems should be safe and secure throughout their operational lifetime, and verifiably so where applicable and feasible.
7) Failure Transparency: If an AI system causes harm, it should be possible to ascertain why.
8) Judicial Transparency: Any involvement by an autonomous system in judicial decision-making should provide a satisfactory explanation auditable by a competent human authority.
9) Responsibility: Designers and builders of advanced AI systems are stakeholders in the moral implications of their use, misuse, and actions, with a responsibility and opportunity to shape those implications.
10) Value Alignment: Highly autonomous AI systems should be designed so that their goals and behaviors can be assured to align with human values throughout their operation.
11) Human Values: AI systems should be designed and operated so as to be compatible with ideals of human dignity, rights, freedoms, and cultural diversity.
12) Personal Privacy: People should have the right to access, manage and control the data they generate, given AI systems’ power to analyze and utilize that data.
13) Liberty and Privacy: The application of AI to personal data must not unreasonably curtail people’s real or perceived liberty.
14) Shared Benefit: AI technologies should benefit and empower as many people as possible.
15) Shared Prosperity: The economic prosperity created by AI should be shared broadly, to benefit all of humanity.
16) Human Control: Humans should choose how and whether to delegate decisions to AI systems, to accomplish human-chosen objectives.
17) Non-subversion: The power conferred by control of highly advanced AI systems should respect and improve, rather than subvert, the social and civic processes on which the health of society depends.
18) AI Arms Race: An arms race in lethal autonomous weapons should be avoided.
19) Capability Caution: There being no consensus, we should avoid strong assumptions regarding upper limits on future AI capabilities.
20) Importance: Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.
21) Risks: Risks posed by AI systems, especially catastrophic or existential risks, must be subject to planning and mitigation efforts commensurate with their expected impact.
22) Recursive Self-Improvement: AI systems designed to recursively self-improve or self-replicate in a manner that could lead to rapidly increasing quality or quantity must be subject to strict safety and control measures.
23) Common Good: Superintelligence should only be developed in the service of widely shared ethical ideals, and for the benefit of all humanity rather than one state or organization.
Creating human-level AI: Will it happen, and if so, when and how? What key remaining obstacles can be identified? How can we make future AI systems more robust than today’s, so that they do what we want without crashing, malfunctioning or getting hacked?
Panel with Anca Dragan (Berkeley), Demis Hassabis (DeepMind), Guru Banavar (IBM), Oren Etzioni (Allen Institute), Tom Gruber (Apple), Jürgen Schmidhuber (Swiss AI Lab), Yann LeCun (Facebook/NYU), Yoshua Bengio (Montreal) (video)
Superintelligence: Science or fiction? If human level general AI is developed, then what are likely outcomes? What can we do now to maximize the probability of a positive outcome? (video)
Panel with Bart Selman (Cornell), David Chalmers (NYU), Elon Musk (Tesla, SpaceX), Jaan Tallinn (CSER/FLI), Nick Bostrom (FHI), Ray Kurzweil (Google), Stuart Russell (Berkeley), Sam Harris, Demis Hassabis (DeepMind): If we succeed in building human-level AGI, then what are likely outcomes? What would we like to happen?
Panel with Dario Amodei (OpenAI), Nate Soares (MIRI), Shane Legg (DeepMind), Richard Mallah (FLI), Stefano Ermon (Stanford), Viktoriya Krakovna (DeepMind/FLI): Technical research agenda: What can we do now to maximize the chances of a good outcome? (video)
Law, policy & ethics: How can we update legal systems, international treaties and algorithms to be more fair, ethical and efficient and to keep pace with AI?
This is an invitation to collaborate. In particular, it is an invitation to collaborate in framing how we look at and develop machine intelligence. Even more specifically, it is an invitation to collaborate in the construction of a Periodic Table of AI.
Let’s be honest. Thinking about Artificial Intelligence has proven to be difficult for us. We argue constantly about what is and is not AI. We certainly cannot agree on how to test for it. We have difficulty deciding what technologies should be included within it. And we struggle with how to evaluate it.
Even so, we are looking at a future in which intelligent technologies are becoming commonplace.
With that in mind, we propose an approach to viewing machine intelligence from the perspective of its functional components. Rather than argue about the technologies behind them, the focus should be on the functional elements that make up intelligence. By stepping away from how these elements are implemented, we can talk about what they are and their roles within larger systems.
When education fails to keep pace with technology, the result is inequality. Without the skills to stay useful as innovations arrive, workers suffer—and if enough of them fall behind, society starts to fall apart. That fundamental insight seized reformers in the Industrial Revolution, heralding state-funded universal schooling. Later, automation in factories and offices called forth a surge in college graduates. The combination of education and innovation, spread over decades, led to a remarkable flowering of prosperity.
Today robotics and artificial intelligence call for another education revolution. This time, however, working lives are so lengthy and so fast-changing that simply cramming more schooling in at the start is not enough. People must also be able to acquire new skills throughout their careers.
Unfortunately, as our special report in this issue sets out, the lifelong learning that exists today mainly benefits high achievers—and is therefore more likely to exacerbate inequality than diminish it. If 21st-century economies are not to create a massive underclass, policymakers urgently need to work out how to help all their citizens learn while they earn. So far, their ambition has fallen pitifully short.
At the same time on-the-job training is shrinking. In America and Britain it has fallen by roughly half in the past two decades. Self-employment is spreading, leaving more people to take responsibility for their own skills. Taking time out later in life to pursue a formal qualification is an option, but it costs money and most colleges are geared towards youngsters.
The classic model of education—a burst at the start and top-ups through company training—is breaking down. One reason is the need for new, and constantly updated, skills.
A college degree at the start of a working career does not answer the need for the continuous acquisition of new skills, especially as career spans are lengthening. Vocational training is good at giving people job-specific skills, but those, too, will need to be updated over and over again during a career lasting decades. “Germany is often lauded for its apprenticeships, but the economy has failed to adapt to the knowledge economy,” says Andreas Schleicher, head of the education directorate of the OECD, a club of mostly rich countries. “Vocational training has a role, but training someone early to do one thing all their lives is not the answer to lifelong learning.”
To remain competitive, and to give low- and high-skilled workers alike the best chance of success, economies need to offer training and career-focused education throughout people’s working lives. This special report will chart some of the efforts being made to connect education and employment in new ways, both by smoothing entry into the labour force and by enabling people to learn new skills throughout their careers. Many of these initiatives are still embryonic, but they offer a glimpse into the future and a guide to the problems raised by lifelong reskilling.
Individuals, too, increasingly seem to accept the need for continuous rebooting.
The UK government is driving the artificial intelligence agenda, pinpointing it as a technology that will drive the fourth industrial revolution and billing its importance as on a par with the steam engine.
The report on Artificial Intelligence by the Government Office for Science follows the recent House of Commons Committee report on Robotics and AI, setting out the opportunities and implications for the future of decision making. In a report which spans government deployment, ethics and the labour market, Digital Minister Matt Hancock provided a foreword which pushed AI as a technology which would benefit the economy and UK citizens.
“Ethics often falls behind the technology,” says Voithofer of Ohio State. Personal data becomes more abstract when it’s combined with other datasets or reused for multiple purposes, he adds. Say a device collects and anonymizes data about a student’s emotional patterns. Later on that information might be combined with information about her test scores and could be reassociated with her. Some students might object to colleges making judgments about their academic performance from indirect measurements of their emotional states.
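To make that re-identification risk concrete, here is a minimal, purely hypothetical Python sketch (the dataset fields, values, and names are invented for illustration). Two datasets that each look harmless on their own — “anonymized” emotional-pattern readings from a classroom device, and a roster that carries names and test scores — can be re-associated simply by joining on shared quasi-identifiers such as course section, date, and seat.

```python
# Hypothetical illustration only: field names and records are invented.
# Shows how two "anonymized" datasets can be re-linked via quasi-identifiers.
import pandas as pd

# Dataset 1: device readings with the student's name removed ("anonymized"),
# but still carrying course section, class date, and seat as context.
emotional_readings = pd.DataFrame({
    "course_section": ["BIO-101-A", "BIO-101-A", "BIO-101-B"],
    "class_date":     ["2017-02-01", "2017-02-01", "2017-02-01"],
    "seat":           [12, 14, 3],
    "stress_level":   [0.82, 0.35, 0.60],
})

# Dataset 2: a roster/gradebook that does include names and test scores.
roster = pd.DataFrame({
    "course_section": ["BIO-101-A", "BIO-101-A", "BIO-101-B"],
    "class_date":     ["2017-02-01", "2017-02-01", "2017-02-01"],
    "seat":           [12, 14, 3],
    "student":        ["Alice", "Bob", "Carol"],
    "test_score":     [91, 78, 85],
})

# Joining on the shared quasi-identifiers re-identifies every "anonymous" row.
reidentified = emotional_readings.merge(
    roster, on=["course_section", "class_date", "seat"]
)
print(reidentified[["student", "stress_level", "test_score"]])
```

Neither table names a student on its own, yet the join does — which is exactly the sense in which data that was anonymized for one purpose stops being abstract once it is combined with other datasets or reused for another.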
A world where DNA can be rewritten to fix deadly diseases has moved a step closer after scientists announced they had genetically edited the cells of a human for the first time using a groundbreaking technique.
A man in China was injected with modified immune cells which had been engineered to fight his lung cancer. Larger trials are scheduled to take place next year in the US and Beijing, which scientists say could open up a new era of genetic medicine.
The technique used is called Crispr, which works like tiny molecular scissors snipping away genetic code and replacing it with new instructions to build better cells.
Can you be sexually assaulted in virtual reality? And can anything be done to prevent it? Those are a few of the most pressing ethical questions technologists, investors and we the public will face as VR grows.
The scope of Alphabet’s ambition for the Google brand is clear: It wants Google’s information organizing brain to be embedded right at the domestic center — i.e. where it’s all but impossible for consumers not to feed it with a steady stream of highly personal data. (Sure, there’s a mute button on the Google Home, but the fact you have to push a button to shut off the ear speaks volumes… )
In other words, your daily business is Google’s business.
“We’re moving from a mobile-first world to an AI-first world,” said CEO Sundar Pichai…
But what’s really not OK, Google is the seismic privacy trade-offs involved here. And the way in which Alphabet works to skate over the surface of these concerns.
What he does not say is far more interesting, i.e. that in order to offer its promise of “custom convenience” — with predictions about restaurants you might like to eat at, say, or suggestions for how bad the traffic might be on your commute to work — it is continuously harvesting and data-mining your personal information, preferences, predilections, peccadilloes, prejudices… and so on and on and on. AI never stops needing data. Not where fickle humans are concerned.
Welcome to a world without work — by Ryan Avent
Automation and globalisation are combining to generate a world with a surfeit of labour and too little work.
A new age is dawning. Whether it is a wonderful one or a terrible one remains to be seen. Look around and the signs of dizzying technological progress are difficult to miss. Driverless cars and drones, not long ago the stuff of science fiction, are now oddities that can occasionally be spotted in the wild and which will soon be a commonplace in cities around the world.
From DSC: I don’t see a world without work being good for us in the least. I think we humans need to feel that we are contributing to something. We need a purpose for living out our days here on Earth (even though they are but a vapor). We need vision…goals to work towards as we seek to use the gifts, abilities, passions, and interests that the LORD gave to us. The author of the above article would also add that work:
Is a source of personal identity
Helps give structure to our days and our lives
Offers the possibility of personal fulfillment that comes from being of use to others
Is a critical part of the glue that holds society together and smooths its operation
Over the last generation, work has become ever less effective at performing these roles. That, in turn, has placed pressure on government services and budgets, contributing to a more poisonous and less generous politics. Meanwhile, the march of technological progress continues, adding to the strain.
We live in an age of transformative scientific powers, capable of changing the very nature of the human species and radically remaking the planet itself.
Advances in information technologies and artificial intelligence are combining with advances in the biological sciences (including genetics, reproductive technologies, neuroscience, and synthetic biology), as well as advances in the physical sciences, to create breathtaking synergies — now recognized as the Fourth Industrial Revolution.
Since these technologies will ultimately decide so much of our future, it is deeply irresponsible not to consider together whether and how to deploy them. Thankfully there is growing global recognition of the need for governance.
This then leads to the ethical implications of using robots. Embracing a number of areas of research, robot ethics considers whether the use of a device within a particular field is acceptable and also whether the device itself is behaving ethically. When it comes to robot babies there are already a number of issues that are apparent. Should “parents” be allowed to choose the features of their robot, for example? How might parents be counseled when returning their robot baby? And will that baby be used again in the same form?
Late last month, popular websites like Twitter, Pinterest, Reddit and PayPal went down for most of a day. The distributed denial-of-service attack that caused the outages, and the vulnerabilities that made the attack possible, were as much a failure of market and policy as they were of technology. If we want to secure our increasingly computerized and connected world, we need more government involvement in the security of the “Internet of Things” and increased regulation of what are now critical and life-threatening technologies. It’s no longer a question of if, it’s a question of when.
An additional market failure illustrated by the Dyn attack is that neither the seller nor the buyer of those devices cares about fixing the vulnerability. The owners of those devices don’t care. They wanted a webcam — or thermostat, or refrigerator — with nice features at a good price. Even after they were recruited into this botnet, they still work fine — you can’t even tell they were used in the attack. The sellers of those devices don’t care: They’ve already moved on to selling newer and better models. There is no market solution because the insecurity primarily affects other people. It’s a form of invisible pollution.