On 9/24/18, I released the Top Tools for Learning 2018, which I compiled from the results of the 12th Annual Digital Learning Tools Survey.
I have also categorised the tools into 30 different areas, and produced 3 sub-lists that provide some context to how the tools are being used:
Top 100 Tools for Personal & Professional Learning 2018 (PPL100): the digital tools used by individuals for their own self-improvement, learning and development – both inside and outside the workplace.
Top 100 Tools for Workplace Learning (WPL100): the digital tools used to design, deliver, enable and/or support learning in the workplace.
Top 100 Tools for Education (EDU100): the digital tools used by educators and students in schools, colleges, universities, adult education etc.
3 – Web courses are increasing in popularity. Although Coursera is still the most popular web course platform, there are, in fact, now 12 web course platforms on the list. New additions this year include Udacity and Highbrow (the latter provides daily micro-lessons). It is clear that people like these platforms because they can choose what they want to study as well as how they want to study, i.e. they can dip in and out if they want to and no-one is going to tell them off – unlike most corporate online courses, which have a prescribed path through them and whose use is heavily monitored.
5 – Learning at work is becoming personal and continuous. The most significant feature of the list this year is the huge leap up the list that Degreed has made – up 86 places to 47th place – the biggest increase by any tool this year. Degreed is a lifelong learning platform that provides the opportunity for individuals to own their expertise and development through a continuous learning approach. And, interestingly, Degreed appears on both the PPL100 (at 30) and WPL100 (at 52). This suggests that some organisations are beginning to see the importance of personal, continuous learning at work. Indeed, another platform that underpins this has also moved up the list significantly this year. Anders Pink is a smart curation platform, available for both individuals and teams, which delivers daily curated resources on specified topics. Non-traditional learning platforms are therefore coming to the forefront, as the next point further shows.
From DSC: Perhaps some foreshadowing of the presence of a powerful, online-based, next generation learning platform…?
At MIT Technology Review’s EmTech conference, Fang outlined recent work across academia that applies AI to protect critical national infrastructure, reduce homelessness, and even prevent suicides.
Andrew Moore, the new chief of Google Cloud AI, co-chairs a task force on AI and national security with deep defense sector ties.
Moore leads the task force with Robert Work, the man who reportedly helped to create Project Maven.
Moore has given various talks about the role of AI in defense, once noting that it is now possible to deploy drones capable of surveilling “pretty much the whole world.”
One former Googler told Business Insider that the hiring of Moore is a “punch in the face” to those employees.
The AI revolution is equally significant, and humanity must not make the same mistake again. It is imperative to address new questions about the nature of post-AI societies and the values that should underpin the design, regulation, and use of AI in these societies. This is why initiatives like the abovementioned AI4People and IEEE projects, the European Union (EU) strategy for AI, the EU Declaration of Cooperation on Artificial Intelligence, and the Partnership on Artificial Intelligence to Benefit People and Society are so important (see the supplementary materials for suggested further reading). A coordinated effort by civil society, politics, business, and academia will help to identify and pursue the best strategies to make AI a force for good and unlock its potential to foster human flourishing while respecting human dignity.
Ethical regulation of the design and use of AI is a complex but necessary task. The alternative may lead to devaluation of individual rights and social values, rejection of AI-based innovation, and ultimately a missed opportunity to use AI to improve individual wellbeing and social welfare.
Robot wars — from ethicaljournalismnetwork.org by James Ball How artificial intelligence will define the future of news
Excerpt:
There are two paths ahead in the future of journalism, and both of them are shaped by artificial intelligence.
The first is a future in which newsrooms and their reporters are robust: Thanks to the use of artificial intelligence, high-quality reporting has been enhanced. Not only do AI scripts manage the writing of simple day-to-day articles such as companies’ quarterly earnings updates, they also monitor and track masses of data for outliers, flagging these to human reporters to investigate.
Beyond business journalism, comprehensive sports stats AIs keep key figures in the hands of sports journalists, letting them focus on the games and the stories around them. The automated future has worked.
The alternative is very different. In this world, AI reporters have replaced their human counterparts and left accountability journalism hollowed out. Facing financial pressure, news organizations embraced AI to handle much of their day-to-day reporting, first for their financial and sports sections, then bringing in more advanced scripts capable of reshaping wire copy to suit their outlet’s political agenda. A few banner hires remain, but there is virtually no career path for those who would hope to replace them, and stories that can’t be tackled by AI are generally missed.
That’s all great, but even if an AI is amazing, it will still fail sometimes. When the mistake is caused by a machine or an algorithm instead of a human, who is to blame?
This is not an abstract discussion. Defining both ethical and legal responsibility in the world of medical care is vital for building patients’ trust in the profession and its standards. It’s also essential in determining how to compensate individuals who fall victim to medical errors, and ensuring high-quality care. “Liability is supposed to discourage people from doing things they shouldn’t do,” says Michael Froomkin, a law professor at the University of Miami.
Alibaba looks to arm hotels, cities with its AI technology — from zdnet.com by Eileen Yu Chinese internet giant is touting the use of artificial intelligence technology to arm drivers with real-time data on road conditions as well as robots in the hospitality sector, where they can deliver meals and laundry to guests.
Excerpt:
Alibaba A.I. Labs’ general manager Chen Lijuan said the new robots aimed to “bridge the gap” between guest needs and their expected response time. Describing the robot as the next evolution towards smart hotels, Chen said it tapped AI technology to address pain points in the hospitality sector, such as improving service efficiencies.
Alibaba is hoping the robot can ease hotels’ dependence on human labour by fulfilling a range of tasks, including delivering meals and taking the laundry to guests.
Accenture has enhanced the Accenture Intelligent Patient Platform with the addition of Ella and Ethan, two interactive virtual-assistant bots that use artificial intelligence (AI) to constantly learn and make intelligent recommendations for interactions between life sciences companies, patients, health care providers (HCPs) and caregivers. Designed to help improve a patient’s health and overall experience, the bots are part of Accenture’s Salesforce Fullforce Solutions powered by Salesforce Health Cloud and Einstein AI, as well as Amazon’s Alexa.
FRANKFURT AM MAIN (AFP) – German business software giant SAP published Tuesday an ethics code to govern its research into artificial intelligence (AI), aiming to prevent the technology infringing on people’s rights, displacing workers or inheriting biases from its human designers.
Episode Summary: This week on AI in Industry, we speak with Amir Saffari, Senior Vice President of AI at BenevolentAI, a London-based pharmaceutical company that uses machine learning to find new uses for existing drugs and new treatments for diseases.
In speaking with him, we aim to learn two things:
How will machine learning play a role in the phases of drug discovery, from generating hypotheses to clinical trials?
In the future, what are the roles of man and machine in drug discovery? What processes will machines automate and potentially do better than humans in this field?
A few other articles caught my eye as well:
This little robot swims through pipes and finds out if they’re leaking — from fastcompany.com by Adele Peters Lighthouse, U.S. winner of the James Dyson Award, looks like a badminton birdie and detects the suction of water leaving pipes – which is a lot of water that we could put to better use.
Samsung’s New York AI center will focus on robotics — from engadget.com by Saqib Shah NYU’s AI Now Institute is close by and Samsung is keen for academic input. Excerpt: Samsung now has an artificial intelligence center in New York City — its third in North America and sixth in total — with an eye on robotics; a first for the company. It opened in Chelsea, Manhattan on Friday, walking distance from NYU (home to its own AI lab), boosting Samsung’s hopes for an academic collaboration.
Business schools bridge the artificial intelligence skills gap — from swisscognitive.ch Excerpt:
Business schools such as Kellogg, Insead and MIT Sloan have introduced courses on AI over the past two years, but Smith is the first to offer a full programme where students delve deep into machine learning.
…
“Technologists can tell you all about the technology but usually not what kind of business problems it can solve,” Carlsson says. With business leaders, he adds, it is the other way round — they have plenty of ideas about how to improve their company but little way of knowing what the new technology can achieve. “The foundational skills businesses need to hack the potential of AI is the understanding of what problems the tech is actually good at solving,” he says.
This time last year, we were getting our first taste of what mobile app developers could do in augmented reality with Apple’s ARKit, and most people had never heard of Animojis. Google’s AR platform was still Tango. Snapchat had just introduced its World Lens AR experiences. Most mobile AR experiences existing in the wild were marker-based offerings from the likes of Blippar and Zappar, or generic Pokémon GO knock-offs.
In last year’s NR50, published before the introduction of ARKit, only two of the top 10 professionals worked directly with mobile AR, and Apple CEO Tim Cook was ranked number 26, based primarily on his forward-looking statements about AR.
This year, Cook comes in at number one, with five others categorized under mobile AR in the overall top 10 of the NR30.
What a difference a year makes.
In just 12 months, we’ve seen mobile AR grow at a breakneck pace. Since Apple launched its AR toolkit, users have downloaded more than 13 million ARKit apps from the App Store, not including existing apps updated with ARKit capabilities. Apple has already updated its platform and will introduce even more new features to the public with the release of ARKit 2.0 this fall. Last year’s iPhone X also introduced a depth-sensing camera and AR Animojis that captured the imaginations of its users.
Augmented reality made its live broadcast debut for The Weather Channel in 2015. The technology helps on-air talent at the network to explain the science behind weather phenomena and tell more immersive stories. Powered by Unreal Engine, The Future Group’s Frontier platform will enable The Weather Channel to be able to show even more realistic AR content, such as accurately rendered storms and detailed cityscapes, all in real time.
From DSC: Imagine this type of thing in online-based learning, MOOCs, and/or even in blended learning environments (i.e., in situations where learning materials are designed/created by teams of specialists). If that were the case, who needs to be trained to create these pieces? Will students be creating these types of pieces in the future? Hmmm….
Winners announced of the 2018 Journalism 360 Challenge — from vrfocus.com The question of “How might we experiment with immersive storytelling to advance the field of journalism?” looks to be answered by 11 projects.
Excerpt:
The eleven winners of a contest held by the Google News Initiative, Knight Foundation and Online News Association were announced on 9/11/18. The 2018 Journalism 360 Challenge asked people the question “How might we experiment with immersive storytelling to advance the field of journalism?” and generated over 400 responses.
VR makes people feel like they’re really there. The “intellectual and physiological reactions” to constructs and events in VR are the same — “and sometimes identical” — to a person’s reactions in the real world;
3D technologies facilitate active and experiential learning. AR, for example, lets users interact with an object in ways that aren’t possible in the physical world — such as seeing through surfaces or viewing data about underlying objects. And with 3D printing, learners can create “physical objects that might otherwise exist only as simulations”; and
Simulations allow for scaling up of “high-touch, high-cost learning experiences.” Students may be able to go through virtual lab activities, for instance, even when a physical lab isn’t available.
Common challenges included implementation learning curves, instructional design, data storage of 3D images and effective cross-departmental collaboration.
“One significant result from this research is that it shows that these extended reality technologies are applicable across a wide spectrum of academic disciplines,” said Malcolm Brown, director of learning initiatives at Educause, in a statement. “In addition to the scientific disciplines, students in the humanities, for example, can re-construct cities and structures that no longer exist. I think this study will go a long way in encouraging faculty, instructional designers and educational technologists across higher education to further experiment with these technologies to vivify learning experiences in nearly all courses of study.”
The first collaborative VR molecular modeling application was released August 29 to encourage hands-on chemistry experimentation.
The open-source tool is free for download now on Oculus and Steam.
Nanome Inc., the San Diego-based start-up that built the intuitive application, comprises UCSD professors and researchers, web developers and top-level pharmaceutical executives.
“With our tool, anyone can reach out and experience science at the nanoscale as if it is right in front of them. At Nanome, we are bringing the craftsmanship and natural intuition from interacting with these nanoscale structures at room scale to everyone,” McCloskey said.
From DSC: While VR will have its place — especially for times when you need to completely immerse yourself into another environment — I think AR and MR will be much larger and have a greater variety of applications. For example, I could see where instructions on how to put something together in the future could use AR and/or MR to assist with that process. The system could highlight the next part that I’m looking for and then highlight where it goes — and, if requested, show me a clip on how it fits into what I’m trying to put together.
“…workers with mixed-reality solutions that enable remote assistance, spatial planning, environmentally contextual data, and much more,” Bardeen told me. With the HoloLens, Firstline Workers conduct their usual, day-to-day activities with the added benefit of a heads-up, hands-free display that gives them immediate access to valuable, contextual information. Microsoft says speech services like Cortana will be critical for control, along with gesture, according to the unique needs of each situation.
Expect new worker roles. What constitutes an “information worker” could change because mixed reality will allow everyone to be involved in the collection and use of information. Many more types of information will become available to any worker in a compelling, easy-to-understand way.
ZHENGZHOU, China — In the Chinese city of Zhengzhou, a police officer wearing facial recognition glasses spotted a heroin smuggler at a train station.
In Qingdao, a city famous for its German colonial heritage, cameras powered by artificial intelligence helped the police snatch two dozen criminal suspects in the midst of a big annual beer festival.
In Wuhu, a fugitive murder suspect was identified by a camera as he bought food from a street vendor.
With millions of cameras and billions of lines of code, China is building a high-tech authoritarian future. Beijing is embracing technologies like facial recognition and artificial intelligence to identify and track 1.4 billion people. It wants to assemble a vast and unprecedented national surveillance system, with crucial help from its thriving technology industry.
In some cities, cameras scan train stations for China’s most wanted. Billboard-size displays show the faces of jaywalkers and list the names of people who don’t pay their debts. Facial recognition scanners guard the entrances to housing complexes. Already, China has an estimated 200 million surveillance cameras — four times as many as the United States.
Such efforts supplement other systems that track internet use and communications, hotel stays, train and plane trips and even car travel in some places.
From DSC: A veeeeery slippery slope here. The usage of this technology starts out as looking for criminals, but then what’s next? Jail time for people who disagree w/ a government official’s perspective on something? Persecution for people seen coming out of a certain place of worship?
Lehi, UT, May 29, 2018 (GLOBE NEWSWIRE) — Today, fast-growing augmented reality startup, Seek, is launching Seek Studio, the world’s first mobile augmented reality studio, allowing anybody with a phone (no coding expertise required) to create their own AR experiences and publish them for the world to see. With mobile AR now made more readily available, average consumers are beginning to discover the magic that AR can bring to the palm of their hand, and Seek Studio turns everyone into a creator.
To make the process incredibly easy, Seek provides templates for users to create their first AR experiences. As an example, a user can select a photo on their phone, outline the portion of the image they want turned into a 3D object and then publish it to Seek. They will then be able to share it with their friends through popular social networks or text. A brand could additionally upload a 3D model of their product and publish it to Seek, providing an experience for their customers to easily view that content in their own home. Seek Studio will launch with 6 templates and will release new ones every few days over the coming months to constantly improve the complexity and types of experiences possible to create within the platform.
Apple unveiled its new augmented reality file format, as well as ARKit 2.0, at its annual WWDC developer conference today. Both will be available to users later this year with iOS 12.
The tech company partnered with Pixar to develop the AR file format Universal Scene Description (USDZ) to streamline the process of sharing and accessing augmented reality files. USDZ will be compatible with tools like Adobe, Autodesk, Sketchfab, PTC, and Quixel. Adobe CTO Abhay Parasnis spoke briefly on stage about how the file format will have native Adobe Creative Cloud support, and described it as the first time “you’ll be able to have what you see is what you get (WYSIWYG) editing” for AR objects.
With a starting focus on University-level education and vocational schools in sectors such as mechanical engineering, VivEdu branched out to K-12 education in 2018, boasting a comprehensive VR approach to learning science, technology, engineering, mathematics, and art for kids.
That roadmap, of course, is just beginning. Which is where the developers—and those arm’s-length iPads—come in. “They’re pushing AR onto phones to make sure they’re a winner when the headsets come around,” Miesnieks says of Apple. “You can’t wait for headsets and then quickly do 10 years’ worth of R&D on the software.”
To fully realize the potential will require a broad ecosystem. Adobe is partnering with technology leaders to standardize interaction models and file formats in the rapidly growing AR ecosystem. We’re also working with leading platform vendors, open standards efforts like usdz and glTF as well as media companies and the creative community to deliver a comprehensive AR offering. usdz is now supported by Apple, Adobe, Pixar and many others while glTF is supported by Google, Facebook, Microsoft, Adobe and other industry leaders.
There are a number of professionals who would find the ability to quickly and easily create floor plans to be extremely useful. Estate agents, interior designers and event organisers would all no doubt benefit from such a capability. For those users, the new feature added to iStaging’s VR Maker app might be of considerable interest.
The new VR Maker feature utilises Apple’s ARKit toolset to recognise spaces, such as walls and floors and can provide accurate measurements. By scanning each wall of a space, a floor plan can be produced quickly and easily.
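The article doesn’t detail how VR Maker turns wall scans into a floor plan, but the geometry behind such a feature is straightforward to sketch: once each wall has been measured, a room can be represented as an ordered list of corner coordinates, and the enclosed floor area recovered with the shoelace formula. The function below is a hypothetical illustration of that idea, not iStaging’s actual implementation.

```python
def floor_area(corners):
    """Return the area of a simple polygon given its corners in order.

    corners: list of (x, y) tuples in metres, listed clockwise or
    counter-clockwise around the room (one corner per scanned wall joint).
    """
    n = len(corners)
    if n < 3:
        raise ValueError("a floor plan needs at least three corners")
    total = 0.0
    for i in range(n):
        x1, y1 = corners[i]
        x2, y2 = corners[(i + 1) % n]  # wrap around to close the polygon
        total += x1 * y2 - x2 * y1     # shoelace cross-product term
    return abs(total) / 2.0

# A 4 m x 3 m rectangular room:
print(floor_area([(0, 0), (4, 0), (4, 3), (0, 3)]))  # prints 12.0
```

In a real app, the corner coordinates would come from ARKit’s plane detection rather than being typed in, but the reduction from wall measurements to a usable floor plan is essentially this kind of polygon arithmetic.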
I’ve interviewed nine investors who have provided their insights on where the VR industry has come, as well as the risks and opportunities that exist in 2018 and beyond. We’ve asked them what opportunities are available in the space — and what tips they have for startups.
Augmented reality (AR) hasn’t truly permeated the mainstream consciousness yet, but the technology is swiftly being adopted by global industries. It’ll soon be unsurprising to find a pair of AR glasses strapped to a helmet sitting on the heads of service workers, and RealWear, a company at the forefront on developing these headsets, thinks it’s on the edge of something big.
…
VOICE ACTIVATION
What’s most impressive about the RealWear HMT-1Z1 is how you control it. There are no touch-sensitive gestures you need to learn — it’s all managed with voice, and better yet, there’s no need for a hotword like “Hey Google.” The headset listens for certain commands. For example, from the home screen just say “show my files” to see files downloaded to the device, and you can go back to the home screen by saying “navigate home.” When you’re looking at documents — like schematics — you can say “zoom in” or “zoom out” to change focus. It worked almost flawlessly, even in a noisy environment like the AWE show floor.
David Scowsill‘s experience in the aviation industry spans over 30 years. He has worked for British Airways, American Airlines, Easy Jet, Manchester Airport, and most recently the World Travel and Tourism Council, giving him a unique perspective on how Augmented and Virtual Reality (AVR) can impact the aviation industry.
These technologies have the power to transform the entire aviation industry, providing benefits to companies and consumers. From check-in, baggage drop, ramp operations and maintenance, to pilots and flight attendants, AVR can accelerate training, improve safety, and increase efficiency.
London-based design studio Marshmallow Laser Feast is using VR to let us reconnect with nature. With headsets, you can see a forest through the eyes of different animals and experience the sensations they feel. Creative Director Ersinhan Ersin took the stage at TNW Conference last week to show us how and why they created the project, titled In the Eyes of the Animal.
Have you already taken a side when it comes to XR wearables? Whether you prefer AR glasses or VR headsets likely depends on the application you need. But wouldn’t it be great to have a device that could perform as both? As XR tech advances, we think crossovers will start popping up around the world.
A Beijing startup called AntVR recently rocketed past its Kickstarter goal for an AR/VR visor. Their product, the Mix, uses tinted lenses to toggle between real world overlay and full immersion. It’s an exciting prospect. But rather than digging into the tech (or the controversy surrounding their name, their marketing, and a certain Marvel character) we’re looking at what this means for how XR devices are developed and sold.
Google is bringing AR tech to its Expeditions app with a new update going live today. Last year, the company introduced its GoogleExpeditions AR Pioneer Program, which brought the app into classrooms across the country; with this launch the functionality is available to all.
Expeditions will have more than 100 AR tours in addition to the 800 VR tours already available. Examples include experiences that let users explore Leonardo Da Vinci’s inventions and ones that let you interact with the human skeletal system.
At four recent VR conferences and events there was a palpable sense that despite new home VR devices getting the majority of marketing and media attention this year, the immediate promise and momentum is in the location-based VR (LBVR) attractions industry. The VR Arcade Conference (April 29th and 30th), VRLA (May 4th and 5th), the Digital Entertainment Group’s May meeting (May 1), and FoIL (Future of Immersive Leisure, May 16th and 17th) all highlighted a topic that suddenly no one can stop talking about: LBVR. With hungry landlords giving great deals for empty retail locations, VRcades, which are inexpensive to open (like Internet cafes), are popping up all over the country. As a result, VRcade royalties for developers are on the rise, so they are shifting their attention accordingly to shorter experiences optimized for LBVR, which is much less expensive than building a VR app for the home.
Below are some excerpted slides from her presentation…
Also see:
20 important takeaways for learning world from Mary Meeker’s brilliant tech trends – from donaldclarkplanb.blogspot.com by Donald Clark
Excerpt:
Mary Meeker’s slide deck has a reputation of being the Delphic Oracle of tech. But, at 294 slides, it’s a lot to take in. Don’t worry, I’ve been through them all. It has tons of economic stuff that is of marginal interest to education and training, but there’s plenty to get our teeth into. We’re not immune to tech trends; indeed, we tend to follow in lock-step, just a bit later than everyone else. Among the data are lots of fascinating insights that point the way forward in terms of what we’re likely to be doing over the next decade. So here’s a really quick, top-end summary for folk in the learning game.
“Educational content usage online is ramping fast” with over 1 billion daily educational videos watched. There is evidence that use of the Internet for informal and formal learning is taking off.
World of active learning in higher ed — from universitybusiness.com by Sherrie Negrea Formal and informal learning spaces transforming campuses internationally
Excerpts:
Active learning spaces are cropping up at campuses on nearly every continent as schools transform lecture halls, classrooms and informal study areas into collaborative technology hubs. While many international campuses have just started to create active learning spaces, others have been developing them for more than a decade.
As the trend in active learning classrooms has accelerated internationally, colleges in the U.S. can learn from the cutting-edge classroom design and technology that countries such as Australia and Hong Kong have built.
“There are good examples that are coming out from all over the world using different kinds of space design and different types of teaching,” says D. Christopher Brooks, director of research at Educause, who has conducted research on active learning spaces in the United States and China.
“If the students are engaged and motivated and enjoying their learning, they’re more likely to have improved learning outcomes,” says Neil Morris, director of digital learning at the University of Leeds. “And the evidence suggests that these spaces improve their engagement, motivation and enjoyment.”
China is rife with face-scanning technology worthy of Black Mirror. Don’t even think about jaywalking in Jinan, the capital of Shandong province. Last year, traffic-management authorities there started using facial recognition to crack down. When a camera mounted above one of 50 of the city’s busiest intersections detects a jaywalker, it snaps several photos and records a video of the violation. The photos appear on an overhead screen so the offender can see that he or she has been busted, then are cross-checked with the images in a regional police database. Within 20 minutes, snippets of the perp’s ID number and home address are displayed on the crosswalk screen. The offender can choose among three options: a 20-yuan fine (about $3), a half-hour course in traffic rules, or 20 minutes spent assisting police in controlling traffic. Police have also been known to post names and photos of jaywalkers on social media.
…
The technology’s veneer of convenience conceals a dark truth: Quietly and very rapidly, facial recognition has enabled China to become the world’s most advanced surveillance state. A hugely ambitious new government program called the “social credit system” aims to compile unprecedented data sets, including everything from bank-account numbers to court records to internet-search histories, for all Chinese citizens. Based on this information, each person could be assigned a numerical score, to which points might be added for good behavior like winning a community award, and deducted for bad actions like failure to pay a traffic fine. The goal of the program, as stated in government documents, is to “allow the trustworthy to roam everywhere under heaven while making it hard for the discredited to take a single step.”
The Space Satellite Revolution Could Turn Earth into a Surveillance Nightmare — from scout.ai by Becky Ferreira Laser communication between satellites is revolutionizing our ability to track climate change, manage resources, and respond to natural disasters. But there are downsides to putting Earth under a giant microscope.
Excerpts:
And while universal broadband has the potential to open up business and education opportunities to hundreds of thousands of people, it’s the real-time satellite feeds of earth that may have both the most immediate and widespread financial upsides — and the most frightening surveillance implications — for the average person here on earth.
…
Among the industries most likely to benefit from laser communications between these satellites are agriculture and forestry.
…
Satellite data can also be used to engage the public in humanitarian efforts. In the wake of Typhoon Haiyan, DigitalGlobe launched online crowdsourcing campaigns to map damage and help NGOs respond on the ground. And they’ve been identifying vulnerable communities in South Sudan as the nation suffers through unrest and famine.
In an age of intensifying natural disasters, combining these tactics with live satellite video feeds could mean the difference between life and death for thousands of people.
…
Should a company, for example, be able to use real-time video feeds to track your physical location, perhaps in order to better target advertising? Should they be able to use facial recognition and sentiment analysis algorithms to assess your reactions to those ads in real time?
…
While these commercially available images aren’t yet sharp enough to pick up intimate details like faces or phone screens, it’s foreseeable that regulations will be eased to accommodate even sharper images. That trend will continue to prompt privacy concerns, especially if a switch to laser-based satellite communication enables near real-time coverage at high resolutions.
A kaleidoscopic swirl of possible futures confronts us, filled with scenarios where law enforcement officials could rewind satellite footage to identify people at a crime scene, or on a more familial level, parents could remotely watch their kids — or keep tabs on each other — from space. In that world, it’s not hard to imagine privacy becoming even more of a commodity, with wealthy enclaves lobbying to be erased from visual satellite feeds, in a geospatial version of “gated communities.”
From DSC: The pros and cons of technologies…hmmm…this article nicely captures the pluses and minuses that societies around the globe need to be aware of, struggle with, and discuss with each other. Some exciting things here, but some disturbing ones as well.
The rise of China as an AI superpower isn’t a big deal just for China. The competition between the US and China has sparked intense advances in AI that will be impossible to stop anywhere. The change will be massive, and not all of it good. Inequality will widen. As my Uber driver in Cambridge has already intuited, AI will displace a large number of jobs, which will cause social discontent. Consider the progress of Google DeepMind’s AlphaGo software, which beat the best human players of the board game Go in early 2016. It was subsequently bested by AlphaGo Zero, introduced in 2017, which learned by playing games against itself and within 40 days was superior to all the earlier versions. Now imagine those improvements transferring to areas like customer service, telemarketing, assembly lines, reception desks, truck driving, and other routine blue-collar and white-collar work. It will soon be obvious that half of our job tasks can be done better at almost no cost by AI and robots. This will be the fastest transition humankind has experienced, and we’re not ready for it.
… And finally, there are those who deny that AI has any downside at all—which is the position taken by many of the largest AI companies. It’s unfortunate that AI experts aren’t trying to solve the problem. What’s worse, and unbelievably selfish, is that they actually refuse to acknowledge the problem exists in the first place.
These changes are coming, and we need to tell the truth and the whole truth. We need to find the jobs that AI can’t do and train people to do them. We need to reinvent education. These will be the best of times and the worst of times. If we act rationally and quickly, we can bask in what’s best rather than wallow in what’s worst.
From DSC: If a business has a choice between hiring a human being or having the job done by a piece of software and/or by a robot, which do you think they’ll go with? My guess? It’s all about the money — whichever/whomever will be less expensive will get the job.
However, that way of thinking may cause enormous social unrest if the software and robots leave human beings in the (job search) dust. Do we, as a society, win with this way of thinking? To me, it’s capitalism gone astray. We aren’t caring enough for our fellow members of the human race, people who have to put bread and butter on their tables. People who have to support their families. People who want to make solid contributions to society and/or to pursue their vocations/callings — to have/find purpose in their lives.
Others think we’ll be saved by a universal basic income. “Take the extra money made by AI and distribute it to the people who lost their jobs,” they say. “This additional income will help people find their new path, and replace other types of social welfare.” But UBI doesn’t address people’s loss of dignity or meet their need to feel useful. It’s just a convenient way for a beneficiary of the AI revolution to sit back and do nothing.
The CDI algorithm—based on a form of artificial intelligence called machine learning—is at the leading edge of a technological wave starting to hit the U.S. health care industry. After years of experimentation, machine learning’s predictive powers are well-established, and it is poised to move from labs to broad real-world applications, said Zeeshan Syed, who directs Stanford University’s Clinical Inference and Algorithms Program.
“The implications of machine learning are profound,” Syed said. “Yet it also promises to be an unpredictable, disruptive force—likely to alter the way medical decisions are made and put some people out of work.”
Meticulous research, deep study of case law, and intricate argument-building—lawyers have used similar methods to ply their trade for hundreds of years. But they’d better watch out, because artificial intelligence is moving in on the field.
As of 2016, there were over 1,300,000 licensed lawyers and 200,000 paralegals in the U.S. Consultancy group McKinsey estimates that 22 percent of a lawyer’s job and 35 percent of a law clerk’s job can be automated, which means that while humanity won’t be completely overtaken, major businesses and career adjustments aren’t far off (see “Is Technology About to Decimate White-Collar Work?”). In some cases, they’re already here.
“If I was the parent of a law student, I would be concerned a bit,” says Todd Solomon, a partner at the law firm McDermott Will & Emery, based in Chicago. “There are fewer opportunities for young lawyers to get trained, and that’s the case outside of AI already. But if you add AI onto that, there are ways that is advancement, and there are ways it is hurting us as well.”
So far, AI-powered document discovery tools have had the biggest impact on the field. By training on millions of existing documents, case files, and legal briefs, a machine-learning algorithm can learn to flag the appropriate sources a lawyer needs to craft a case, often more successfully than humans. For example, JPMorgan announced earlier this year that it is using software called Contract Intelligence, or COIN, which can in seconds perform document review tasks that took legal aides 360,000 hours.
…
People fresh out of law school won’t be spared the impact of automation either. Document-based grunt work is typically a key training ground for first-year associate lawyers, and AI-based products are already stepping in. CaseMine, a legal technology company based in India, builds on document discovery software with what it calls its “virtual associate,” CaseIQ. The system takes an uploaded brief and suggests changes to make it more authoritative, while providing additional documents that can strengthen a lawyer’s arguments.
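From DSC: The document-discovery workflow described in the excerpt above (train on a corpus, then flag the sources relevant to a case) can be illustrated with a toy sketch. To be clear, this is not how COIN or CaseIQ work internally; their implementations aren't public. It is a minimal bag-of-words cosine-similarity scorer over invented document snippets, standing in for the trained models that real tools use.

```python
import math
from collections import Counter

def vectorize(text):
    """Lowercased bag-of-words term counts."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two term-count vectors."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(
        sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def flag_relevant(query, documents, threshold=0.2):
    """Rank documents by similarity to the query, keeping those above threshold."""
    q = vectorize(query)
    scored = ((doc_id, cosine(q, vectorize(text)))
              for doc_id, text in documents.items())
    return sorted((s for s in scored if s[1] >= threshold),
                  key=lambda s: -s[1])

# Hypothetical mini-corpus; real discovery tools train on millions of case files.
docs = {
    "brief_001": "employment contract dispute over non compete clause",
    "brief_002": "patent infringement claim over semiconductor design",
    "brief_003": "breach of employment contract and wrongful termination",
}

# Flag and rank the briefs relevant to an employment-contract matter.
hits = flag_relevant("employment contract dispute", docs)
```

Production tools replace the similarity function with models trained on labeled case files and legal briefs, but the pipeline shape (vectorize, score, threshold, rank) is the same, which is why the speedup over manual review is so dramatic.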
CIOs are struggling to accelerate deployment of artificial intelligence (AI). A recent Gartner survey of global CIOs found that only 4% of respondents had deployed AI. However, the survey also found that one-fifth of the CIOs are already piloting or planning to pilot AI in the short term.
Such ambition puts these leaders in a challenging position. AI efforts are already stressing staff, skills, and the readiness of in-house and third-party AI products and services. Without effective strategic plans for AI, organizations risk wasting money, falling short in performance and falling behind their business rivals.
“Pursue small-scale plans likely to deliver small-scale payoffs that will offer lessons for larger implementations”
“AI is just starting to become useful to organizations but many will find that AI faces the usual obstacles to progress of any unproven and unfamiliar technology,” says Whit Andrews, vice president and distinguished analyst at Gartner. “However, early AI projects offer valuable lessons and perspectives for enterprise architecture and technology innovation leaders embarking on pilots and more formal AI efforts.”
So what lessons can we learn from these early AI pioneers?
What to do about that? The report’s main recommendation is that people and companies developing AI technology discuss safety and security more actively and openly—including with policymakers. It also asks AI researchers to adopt a more paranoid mindset and consider how enemies or attackers might repurpose their technologies before releasing them.
WSJ: What about adults who are already in the workforce?
DR. AOUN: Society has to provide ways, and higher education has to provide ways, for people to re-educate themselves, reskill themselves or upskill themselves.
That is the part that I see that higher education has not embraced. That’s where there is an enormous opportunity. We look at lifelong learning in higher education as an ancillary operation, as a second-class operation in many cases. We dabble with it, we try to make money out of it, but we don’t embrace it as part of our core mission.
Amazon loves to use the word flywheel to describe how various parts of its massive business work as a single perpetual motion machine. It now has a powerful AI flywheel, where machine-learning innovations in one part of the company fuel the efforts of other teams, who in turn can build products or offer services to affect other groups, or even the company at large. Offering its machine-learning platforms to outsiders as a paid service makes the effort itself profitable—and in certain cases scoops up yet more data to level up the technology even more.
Extended Reality (XR)
Extended Reality (XR) is a recent addition to the technical vocabulary, and for now only a few people are aware of it. XR refers to all real-and-virtual combined environments and human-machine interactions generated by computer technology and wearables. It encompasses Augmented Reality (AR), Virtual Reality (VR), and Mixed Reality (MR); in other words, XR is an umbrella term that brings all three together under one name, reducing public confusion. XR spans the full virtuality continuum, from partial sensory overlays through to fully immersive virtuality.
For the past few years we have been talking about AR, VR, and MR; in the coming years we will probably be talking about XR.
Summary: VR immerses people in a completely virtual environment; AR creates an overlay of virtual content but cannot interact with the physical environment; MR mixes the virtual and the real, creating virtual objects that can interact with the actual environment. XR brings all three (AR, VR, MR) together under one term.
Audi has released a new AR smartphone application that is triggered by their TV commercials. The app brings the cars from the commercial out of the screen and into your living room or driveway.
According to a release from the company, the Audi quattro coaster AR application “recognizes” specific Audi TV commercials. If the right commercial is playing, it will then trigger a series of AR events.
From DSC: How might this type of setup be used for learning-related applications?
Will Augmented and Virtual Reality Replace Textbooks? — from centerdigitaled.com by Michael Mathews
Students who are conceptual and visual learners can grasp concepts through AVR, which in turn allows textbooks to make sense.
Excerpt:
This past year, Tulsa TV-2, an NBC News affiliate, did a great story on the transition in education through the eyes of professors and students who are using augmented and virtual reality. As you watch the news report you will notice the following:
Professors will quickly embrace technology that directly impacts student success.
Students are more engaged and learn quicker through visual stimulation.
Grades can be immediately improved with augmented and virtual reality.
An international and global reach is possible with stimulating technology.
Within the food industry, AR and VR have also begun to make headway. Although development costs are still high, more and more F&B businesses are beginning to realize the potential of AR/VR and see it as a worthwhile investment. Three main areas – human resources, customer experiences, food products – have seen the most concentration of AR/VR development so far and will likely continue to push the envelope on what use cases AR & VR have within the industry.
Hologram-like 3D images offer new ways to study educational models in science and other subjects. zSpace has built a tablet that uses a stylus and glasses to allow students to have interactive learning experiences. Technology like this not only makes education more immersive and captivating, but also can provide more accurate models for students in professional fields like medicine.
Just days after previewing its augmented reality content strategy, the Times has already delivered on its promise to unveil its first official AR coverage, centered on the 2018 Winter Olympic Games in PyeongChang. When viewed through the NYTimes app for iPhones and iPads, the “Four of the World’s Best Olympians, as You’ve Never Seen Them Before” article displays AR content embedded at regular intervals as readers scroll along.
VR in retail is still in its infancy and has yet to become general practice, but given the popularity of video, the immersive experience will undoubtedly catch on. The explanation lies in the fact that the wealth of information and the extensive range of products on offer can overwhelm consumers. Being able to try products at the touch of a button, in an environment that feels real, can make the shopping experience more animated and less stressful. Also, through VR, even regular customers can experience VIP treatment at no additional cost. Sitting in the front row at Paris Fashion Week without leaving your local mall (or, soon, your own house) will become the norm.
In a new study that is optimistic about automation yet stark in its appraisal of the challenge ahead, McKinsey says massive government intervention will be required to hold societies together against the ravages of labor disruption over the next 13 years. Up to 800 million people—including a third of the work force in the U.S. and Germany—will be made jobless by 2030, the study says.
The bottom line: The economy of most countries will eventually replace the lost jobs, the study says, but many of the unemployed will need considerable help to shift to new work, and salaries could continue to flatline. “It’s a Marshall Plan size of task,” Michael Chui, lead author of the McKinsey report, tells Axios.
In the eight-month study, the McKinsey Global Institute, the firm’s think tank, found that almost half of those thrown out of work—375 million people, comprising 14% of the global work force—will have to find entirely new occupations, since their old ones will either no longer exist or need far fewer workers. China will have the highest such absolute numbers—100 million people changing occupations, or 12% of the country’s 2030 work force.
I asked Chui what surprised him most about the findings. “The degree of transition that needs to happen over time is a real eye opener,” he said.
The transition compares to the U.S. shift from a largely agricultural to an industrial-services economy from the early 1900s onward. But this time, it’s not young people leaving farms, but mid-career workers who need new skills.
From DSC: Higher education — and likely (strictly) vocational training outside of higher ed — is simply not ready for this! MAJOR reinvention will be necessary, and as soon as 2018 according to Forrester Research.
One of the key values that institutions of traditional higher education can bring to the table is to help people through this gut wrenching transition — identifying which jobs are going to last for the next 5-10+ years and which ones won’t, and then be about the work of preparing the necessary programs quickly enough to meet the demands of the new economy.
Students/entrepreneurs out there: they say you should look around to see where the needs are and then develop products and/or services to meet those needs. Well, here you go!
As a member of the International Education Committee, at edX we are extremely aware of the changing nature of work and jobs. It is predicted that 50 percent of current jobs will disappear by 2030.
Anant Agarwal, CEO and Founder of edX, and Professor of Electrical Engineering and Computer Science at MIT (source)
A new report predicts that by 2030, as many as 800 million jobs could be lost worldwide to automation. The study, compiled by the McKinsey Global Institute, says that advances in AI and robotics will have a drastic effect on everyday working lives, comparable to the shift away from agricultural societies during the Industrial Revolution. In the US alone, between 39 and 73 million jobs stand to be automated — making up around a third of the total workforce.
If a computer can do one-third of your job, what happens next? Do you get trained to take on new tasks, or does your boss fire you, or some of your colleagues? What if you just get a pay cut instead? Do you have the money to retrain, or will you be forced to take the hit in living standards?