Aside from AWS, Amazon Alexa-enabled devices were the top-selling products across all categories on Amazon.com throughout the holiday season, and the company reports that Echo family sales are up over 9x compared to last season. Amazon aims to brand Alexa as a platform, something that has helped the product gain capabilities faster than its competition. Developers and companies released 4,000 new skills for the voice assistant in just the last quarter.
Alexa got 4,000 new skills in just the last quarter!
From DSC:
What are the teaching & learning ramifications of this?
By the way, I’m not saying that professors, teachers, and trainers should run for the hills (i.e., that they’ll be replaced by AI-based tools). Rather, I would like to suggest that we not only put this type of thing on our radars, but also begin to actively experiment with such technologies to see if they might help us do some heavy lifting for students learning about new topics.
From DSC: The following article reminded me of a vision that I’ve had for the last few years…
How to Build a Production Studio for Online Courses — from campustechnology.com by Dian Schaffhauser
At the College of Business at the University of Illinois, video operations don’t come in one size. Here’s how the institution is handling studio setup for MOOCs, online courses, guest speakers and more.
Though I’m a huge fan of online learning, why build a production studio meant to support online courses only? Let’s take it a step further and design a space that can address content development for online learning as well as for blended learning — which can include the flipped classroom type of approach.
To do so, colleges and universities need to build something akin to what the National University of Singapore has done. I would like to see institutions create facilities large enough to house multiple types of recording studios. Each facility would feature:
One room that has a lightboard and a mobile whiteboard in it — let the faculty member choose which surface that they want to use
Another room that has a Microsoft Surface Hub or a similar interactive, multitouch device
A recording booth with a nice, powerful, large iMac that has ScreenFlow on it. The booth would also include a nice, professional microphone, a pop filter, sound absorbing acoustical panels, and more. Blackboard Collaborate could be used here as well…especially with the Application Sharing feature turned on and/or just showing one’s PowerPoint slides — with or without the video of the faculty member…whatever they prefer.
Another recording booth with a PC and Adobe Captivate, Camtasia Studio, Screencast-O-Matic, or similar tools. As with the previous booth, it would include a professional microphone, a pop filter, sound-absorbing acoustical panels, and more, and Blackboard Collaborate could be used here as well.
Another recording booth with an iPad tablet and apps loaded on it such as Explain Everything:
A large recording studio that is similar to what’s described in the article — a room that incorporates a full-width green screen, with video monitors, a tablet, a podium, several cameras, high-end mics and more. Or, if the budget allows for it, a really high end broadcasting/recording studio like what Harvard Business School is using:
Don’t discount the game-changing power of the morphing “TV” when coupled with artificial intelligence (AI), natural language processing (NLP), and blockchain-based technologies!
When I saw the article below, I couldn’t help but wonder what (we currently know of as) “TVs” will morph into and what functionalities they will be able to provide to us in the not-too-distant future…?
For example, the article mentions that Seiki, Westinghouse, and Element will be offering TVs that can not only access Alexa — a personal assistant from Amazon which uses artificial intelligence — but will also be able to provide access to over 7,000 apps and games via the Amazon Fire TV Store.
Some of the questions that come to my mind:
Why can’t there be more educationally-related games and apps available on this type of platform?
Why can’t the results of the assessments taken on these apps get fed into cloud-based learner profiles that capture one’s lifelong learning? (#blockchain)
When will potential employers start asking for access to such web-based learner profiles?
Will tvOS and similar operating systems expand to provide blockchain-based technologies as well as the types of functionality we get from our current set of CMSs/LMSs?
Will this type of setup become a major outlet for competency-based education as well as for corporate training-related programs?
Will augmented reality (AR), virtual reality (VR), and mixed reality (MR) capabilities come with our near future “TVs”?
Will virtual tutoring be one of the available apps/channels?
Will the microphone and the wide-angle, HD camera on the “TV” be able to be disconnected from the Internet for security reasons? (i.e., to be sure no hacker is eavesdropping on people’s private lives)
The TVs will not only have access to Alexa via a microphone-equipped remote but, more importantly, will have access to the over 7,000 apps and games available on the Amazon Fire TV Store – a huge boon considering that most of these Smart TVs usually include, at max, a few dozen apps.
“I’ve been predicting that by 2030 the largest company on the internet is going to be an education-based company that we haven’t heard of yet,” Frey, the senior futurist at the DaVinci Institute think tank, tells Business Insider.
EdSurge profiles the growth of massive open online courses in 2016, which attracted more than 58 million students across over 700 colleges and universities last year.
The top three MOOC providers — Coursera, Udacity and edX — collectively grossed more than $100 million last year, as much of the content provided on these platforms shifted from free to paywall-guarded materials.
Many MOOCs have moved to offering credentialing programs or nanodegree offerings to increase their value in industrial marketplaces.
Today, we’re at the Windows Hardware Engineering Community event (WinHEC) in Shenzhen, China, where our OEM partners have created more than 300 Windows devices shipping in 75 countries generating more than 8 billion RMB in revenue for Shenzhen partners. We continue this journey with Intel, Qualcomm and hardware engineering creators from around the world. Together, we will build the next generation of modern PCs supporting mixed reality, gaming, advanced security, and artificial intelligence; make mixed reality mainstream; and introduce always-connected, more power efficient cellular PCs running Windows 10.
Nearly 140 private companies working to advance artificial intelligence technologies have been acquired since 2011, with over 40 acquisitions taking place in 2016 alone. Corporate giants like Google, IBM, Yahoo, Intel, Apple and Salesforce are competing in the race to acquire private AI companies, with Samsung emerging as a new entrant in October with its acquisition of startup Viv Labs, which is developing a Siri-like AI assistant, and GE making two AI acquisitions in November.
From DSC: When I saw the article below, I couldn’t help but wonder…what are the teaching & learning-related ramifications when new “skills” are constantly being added to devices like Amazon’s Alexa?
What does it mean for:
Students / learners
Faculty members
Teachers
Trainers
Instructional Designers
Interaction Designers
User Experience Designers
Curriculum Developers
…and others?
Will the capabilities found in Alexa simply come bundled as a part of the “connected/smart TV’s” of the future? Hmm….
Amazon’s Alexa has gained many skills over the past year, such as being able to read tweets or deliver election results and fantasy football scores. Starting on Wednesday, you’ll be able to ask Alexa about Mars.
The new skill for the voice-controlled speaker comes courtesy of NASA’s Jet Propulsion Laboratory. It’s the first Alexa app from the space agency.
Tom Soderstrom, the chief technology officer at NASA’s Jet Propulsion Laboratory, was on hand at the AWS re:Invent conference in Las Vegas tonight to make the announcement.
Amazon today announced three new artificial intelligence-related toolkits for developers building apps on Amazon Web Services.
At the company’s AWS re:Invent conference in Las Vegas, Amazon showed how developers can use three new services — Amazon Lex, Amazon Polly, and Amazon Rekognition — to build artificial intelligence features into apps for platforms like Slack, Facebook Messenger, ZenDesk, and others.
The idea is to let developers utilize the machine learning algorithms and technology that Amazon has already created for its own processes and services like Alexa. Instead of developing their own AI software, AWS customers can simply use an API call or the AWS Management Console to incorporate AI features into their own apps.
AWS Announces Three New Amazon AI Services
Amazon Lex, the technology that powers Amazon Alexa, enables any developer to build rich, conversational user experiences for web, mobile, and connected device apps; preview starts today
Amazon Polly transforms text into lifelike speech, enabling apps to talk with 47 lifelike voices in 24 languages
Amazon Rekognition makes it easy to add image analysis to applications, using powerful deep learning-based image and face recognition
Capital One, Motorola Solutions, SmugMug, American Heart Association, NASA, HubSpot, Redfin, Ohio Health, DuoLingo, Royal National Institute of Blind People, LingApps, GoAnimate, and Coursera are among the many customers using these Amazon AI Services
Excerpt:
SEATTLE–(BUSINESS WIRE)–Nov. 30, 2016– Today at AWS re:Invent, Amazon Web Services, Inc. (AWS), an Amazon.com company (NASDAQ: AMZN), announced three Artificial Intelligence (AI) services that make it easy for any developer to build apps that can understand natural language, turn text into lifelike speech, have conversations using voice or text, analyze images, and recognize faces, objects, and scenes. Amazon Lex, Amazon Polly, and Amazon Rekognition are based on the same proven, highly scalable Amazon technology built by the thousands of deep learning and machine learning experts across the company. Amazon AI services all provide high-quality, high-accuracy AI capabilities that are scalable and cost-effective. Amazon AI services are fully managed services so there are no deep learning algorithms to build, no machine learning models to train, and no up-front commitments or infrastructure investments required. This frees developers to focus on defining and building an entirely new generation of apps that can see, hear, speak, understand, and interact with the world around them.
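To make the press release's "simply use an API call" claim concrete, here is a minimal sketch of the developer workflow for something like Amazon Rekognition's image-labeling service. Since a real call requires the boto3 SDK and AWS credentials, the stub function below stands in for the API; the label values returned are invented for illustration, not real service output.

```python
# Hypothetical sketch: an app sends image bytes to an image-analysis
# service and gets back label/confidence pairs. The stub stands in for
# the real Rekognition call so the sketch runs anywhere.

def detect_labels_stub(image_bytes, max_labels=5):
    """Stand-in for a real detect-labels API call; values are invented."""
    return [
        {"Name": "Plant", "Confidence": 97.1},
        {"Name": "Leaf", "Confidence": 94.8},
    ][:max_labels]

def describe_image(image_bytes):
    """Turn label results into a short human-readable caption."""
    labels = detect_labels_stub(image_bytes)
    return ", ".join(f"{l['Name']} ({l['Confidence']:.0f}%)" for l in labels)

print(describe_image(b"...jpeg bytes..."))  # Plant (97%), Leaf (95%)
```

The point of the fully managed model is visible even in the stub: the app code only formats requests and consumes results; no model training or deep learning expertise is involved.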
From an email from Elliott Masie and the Masie Center:
This 35-page eBook is packed with content, context, conversations, video links, and curated resources that include:
Learning Perspectives from Anderson Cooper, Scott Kelly, Tiffany Shlain, George Takei, Richard Culatta, Karl Kapp, Nancy DeViney, and other Learning 2016 Keynotes
Graphic Illustrations from Deirdre Crowley, Crowley & Co.
Video Links for Content Segments
Learning Perspectives from Elliott Masie
Segments focusing on:
Brain & Cognitive Science
Gamification & Gaming
Micro-Learning
Visual Storytelling
Connected & Flipped Classrooms
Compliance & Learning
Engagement in Virtual Learning
Video & Learning
Virtual Reality & Learning
And much more!
We have created this as an open source, shareable resource that will extend the learning from Learning 2016 to our colleagues around the world. We are using the Open Creative Commons license, so feel free to share!
We believe that CURATION, focusing on extending and organizing follow-up content, is a growing and critical dimension of any learning event. We hope that you find your eBook of value!
From DSC: If we had more beacons on our campus (a Christian liberal arts college), I could see where we could offer a variety of things in new ways:
For example, we might use beacons around the main walkways of our campus where, when we approach these beacons, pieces of advice or teaching could appear on an app on our mobile devices. Examples could include:
Micro-tips on prayer from John Calvin, Martin Luther, or Augustine (i.e., 1 or 2 small tips at a time; could change every day or every week)
Or, for a current, campus-wide Bible study, the app could show a question for that week’s study; you could reflect on that question as you’re walking around
Or, for musical events…when one walks by the Covenant Fine Arts Center, one could get that week’s schedule of performances or what’s currently showing in the Art Gallery
Pieces of Scripture, with links to Biblegateway.com or other sites
Further information re: what’s being displayed on posters within the hallways — works that might be done by faculty members and/or by students
Etc.
A person could turn the app’s notifications on or off at any time. The app would encourage greater exercise; i.e., the more you walk around, the more tips you get.
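The beacon idea above amounts to a lookup: the app hears a beacon's identifier and fetches the micro-content assigned to that spot, honoring the user's notification setting. A minimal sketch follows; the beacon IDs and content strings are invented for illustration.

```python
# Hypothetical sketch: map (region, beacon id) pairs to micro-content.
# In a real iOS/Android app the (region, minor) pair would come from
# the OS's beacon-ranging callbacks; here it is passed in directly.

BEACON_CONTENT = {
    ("campus", 101): "Prayer tip of the week, from John Calvin: ...",
    ("campus", 102): "This week's Bible study question: ...",
    ("campus", 203): "Covenant Fine Arts Center: tonight's performances ...",
}

def on_beacon_detected(region, minor, notifications_on=True):
    """Return the content for this beacon, or None if opted out/unknown."""
    if not notifications_on:
        return None
    return BEACON_CONTENT.get((region, minor))

print(on_beacon_detected("campus", 101))
```

Because the content table is just data, the weekly rotation described above (new tips, new study questions) is a content update, not an app update.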
Blockchain-based credentials may catapult credentialing movement — from ecampusnews.com by Meris Stansbury
Carnegie Mellon, MIT Media Lab, and Learning Machine host groundbreaking conversation about open standards for blockchain credentialing in higher education and beyond.
Excerpt (emphasis DSC):
Leaders from Learning Machine, MIT Media Lab, and Carnegie Mellon University engaged in a groundbreaking conversation with a packed house of EdTech vendors and education leaders at the annual EDUCAUSE conference. Together, they introduced Blockcerts, the open standard for issuing secure, verifiable digital credentials.
Hosted by Learning Machine CEO, Chris Jagers, the panel brought together research from the MIT Media Lab (Principal Engineer Kim Duffy), real-world perspective from the Registrar of Carnegie Mellon University (John Papinchak), implementation details from Learning Machine leadership (COO Dan Hughes), and the societal implications of distributed technologies (Learning Machine Anthropologist Natalie Smolenski). The panelists described a future in which learners are able to act as their own lifelong registrars with blockchain credentialing.
Before we dive into the details of that technology, let’s cover some background. Even though schools moved from sheepskin to digital records a while ago, they still act as the sole record keepers for student information. If a student wants to access or share their official records, they have to engage in a slow, complicated, and often expensive process. And so, for the most part, those records aren’t used much after graduation, nor built upon.
Additionally, education is changing. Online learning and competency-based programs are rising in popularity. And this is magnified by a rapidly growing number of accredited education providers that expand far beyond traditional schools. This is causing a proliferation of educational claims that are hard to manage and it raises many new questions, both in terms of policy and technology. And what I hope to explain today is how a new technical infrastructure has emerged that enables students to be part of the solution by acting as their own lifelong registrar.
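The core idea that lets learners "act as their own lifelong registrar" can be sketched in a few lines: the issuer anchors only a hash of the credential on a public ledger, and anyone can later verify a copy the learner shares by recomputing that hash. This is a greatly simplified illustration of the tamper-evidence idea, not the actual Blockcerts standard (which also involves digital signatures and Merkle proofs).

```python
# Simplified sketch of blockchain credentialing's tamper-evidence idea.
import hashlib
import json

def credential_hash(credential: dict) -> str:
    """Hash a canonical JSON serialization of the credential."""
    canonical = json.dumps(credential, sort_keys=True).encode("utf-8")
    return hashlib.sha256(canonical).hexdigest()

# Issuance: the registrar records the hash (imagine this on a blockchain).
issued = {"learner": "Jane Doe", "degree": "BSc", "year": 2016}
anchored = credential_hash(issued)

# Verification: the learner presents a copy; anyone recomputes and compares.
presented = dict(issued)
assert credential_hash(presented) == anchored   # authentic copy matches
presented["degree"] = "PhD"
assert credential_hash(presented) != anchored   # any edit is detectable
```

Note what the school is no longer needed for: once the hash is anchored, verification requires no phone call to the registrar, which is exactly the shift the panel described.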
From DSC: How long before recommendation engines like this can be filtered/focused down to just display apps, channels, etc. that are educational and/or training related (i.e., a recommendation engine to suggest personalized/customized playlists for learning)?
That is, in the future, will we have personalized/customized playlists for learning on our Apple TVs — as well as on our mobile devices — with the assessment results of our taking the module(s) or course(s) being sent in to:
A credentials database on LinkedIn (via blockchain) and/or
A credentials database at the college(s) or university(ies) that we’re signed up with for lifelong learning (via blockchain)
and/or
To update our cloud-based learning profiles — which can then feed a variety of HR-related systems used to find talent? (via blockchain)
Will participants in MOOCs, virtual K-12 schools, homeschoolers, and more take advantage of learning from home?
Will solid ROIs from having thousands of participants paying a smaller amount (to take your course virtually) enable higher production values?
Will bots and/or human tutors be instantly accessible from our couches?
These affordances are just now starting to be uncovered as machines are increasingly able to ascertain patterns, things, objects…even people (which calls for a separate posting at some point).
But mainly, for today, I wanted to highlight an excellent comment/reply from Nikos Andriotis @ Talent LMS, who gave me permission to highlight his solid reflections and ideas:
From DSC: Excellent reflection/idea Nikos — that would represent some serious personalized, customized learning!
Nikos’ innovative reflections also made me think about his ideas in light of their interaction or impact with web-based learner profiles, credentialing, badging, and lifelong learning. What’s especially noteworthy here is that the innovations (that impact learning) continue to occur mainly in the online and blended learning spaces.
How might the ramifications of these innovations impact institutions who are pretty much doing face-to-face only (in terms of their course delivery mechanisms and pedagogies)?
Given:
That Microsoft purchased LinkedIn and can amass a database of skills and open jobs (playing a cloud-based matchmaker)
Everyday microlearning is key to staying relevant (RSS feeds and tapping into “streams of content” are important here, and so is the use of Twitter)
Teachers at Coppell Independent School District have become the first to use a new IBM and Apple technology platform built to aid personalized learning.
IBM Watson Element for Educators pairs IBM analytics and data tools such as cognitive computing with Apple design. It integrates student grades, interests, participation, and trends to help educators determine how a student learns best, the company says.
…
It also recommends learning content personalized to each student. The platform might suggest a reading assignment on astronomy for a young student who has shown an interest in space.
From DSC: Technologies involved with systems like IBM’s Watson will likely bring some serious impact to the worlds of education and training & development. Such systems — and the affordances that they should be able to offer us — should not be underestimated. The potential for powerful, customized, personalized learning could easily become a reality in K-20 as well as in the corporate training space. This is an area to keep an eye on for sure, especially with the growing influence of cognitive computing and artificial intelligence.
These kinds of technology should prove helpful in suggesting modules and courses (i.e., digital learning playlists), but I think the more powerful systems will be able to drill down far more minutely than that. I think these types of systems will be able to assist with all kinds of math problems and equations as well as analyze writing examples, correct language mispronunciations, and more (perhaps this is already here…apologies if so). In other words, the systems will “learn” where students can go wrong doing a certain kind of math equation…and then suggest steps to correct things when the system spots a mistake (or provide hints at how to correct mistakes).
This road takes us down to places where we have:
Web-based learner profiles — including learner’s preferences, passions, interests, skills
Microlearning/badging/credentialing — likely using blockchain
Learning agents/bots to “contact” for assistance
Guidance for lifelong learning
More choice, more control
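The "learn where students go wrong" idea above can be made concrete with a toy example: check each step a student writes while solving 2x + 3 = 11, and attach a targeted hint to a known wrong turn. The rules here are invented for illustration; a real system would learn these error patterns from data rather than hard-code them.

```python
# Toy sketch of a step-checking tutor for the equation 2x + 3 = 11.
# A production system would infer common-error rules; these are hand-written.

EXPECTED_STEPS = {"2x + 3 = 11": "2x = 8", "2x = 8": "x = 4"}

def check_step(equation, student_step):
    """Compare a student's next step against the expected one; hint on a
    recognized mistake, otherwise give a generic correction."""
    correct = EXPECTED_STEPS.get(equation)
    if student_step == correct:
        return "correct"
    if equation == "2x + 3 = 11" and student_step == "2x = 14":
        return "hint: subtract 3 from both sides (did you add instead?)"
    return f"not quite: try again (expected a step like '{correct}')"

print(check_step("2x + 3 = 11", "2x = 14"))
```

The interesting design choice is that the system responds to *how* the student went wrong (adding 3 instead of subtracting), not just that the answer was wrong, which is the "suggest steps to correct things" behavior described above.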
Also see:
First IBM Watson Education App for iPad Delivers Personalized Learning for K-12 Teachers and Students — from prnewswire.com Educators at Coppell Independent School District in Texas first to use new iPad app to tailor learning experiences to student’s interests and aptitudes
Excerpts: With increasing demands on educators, teachers need tools that will enable them to better identify the individual needs of all students while designing learning experiences that engage and hold the students’ interest as they master the content. This is especially critical given that approximately one third of American students require remedial education when they enter college today, and current college attainment rates are not keeping pace with the country’s projected workforce needs. A view of academic and day-to-day updates in real time can help teachers provide personalized support when students need it.
…
IBM Watson Element provides teachers with a holistic view of each student through a fun, easy-to-use and intuitive mobile experience that is a natural extension of their work. Teachers can get to know their students beyond their academic performance, including information about personal interests and important milestones students choose to share. For example, teachers can input notes when a student’s highly anticipated soccer match is scheduled, when another has just been named president for the school’s World Affairs club, and when another has recently excelled following a science project that sparked a renewed interest in chemistry. The unique “spotlight” feature in Watson Element provides advanced analytics that enables deeper levels of communication between teachers about their students’ accomplishments and progress. For example, if a student is excelling academically, teachers can spotlight that student, praising their accomplishments across the school district. Or, if a student received a top award in the district art show, a teacher can spotlight the student so their other teachers know about it.
From DSC: Consider the affordances that we will soon be experiencing when we combine machine learning — whereby computers “learn” about a variety of things — with new forms of Human Computer Interaction (HCI) — such as Augmented Reality (AR)!
The educational benefits — as well as the business/profit-related benefits — will certainly be significant!
For example, let’s create a new mobile app called “Horticultural App (ML)” * — where ML stands for machine learning. This app would be made available on iOS and Android-based devices. (Though this is strictly hypothetical, I hope and pray that some entrepreneurial individuals and/or organizations out there will take this idea and run with it!)
Some use cases for such an app:
Students, environmentalists, and lifelong learners will be able to take some serious educationally-related nature walks once they launch the Horticultural App (ML) on their smartphones and tablets!
They simply hold up their device, and the app — in conjunction with the device’s camera — will essentially take a picture of whatever the student is focusing in on. Via machine learning, the app will “recognize” the plant, tree, type of grass, flower, etc. — and will then present information about that plant, tree, type of grass, flower, etc.
In the production version of this app, a textual layer could overlay the actual image of the tree/plant/flower/grass/etc. in the background — and this is where augmented reality comes into play. Also, perhaps there would be an opacity setting that would be user controlled — allowing the learner to fade in or fade out the information about the flower, tree, plant, etc.
Or let’s look at the potential uses of this type of app from some different angles.
Let’s say you live in Michigan and you want to be sure an area of the park that you are in doesn’t have any Eastern Poison Ivy in it — so you launch the app and review any suspicious looking plants. As it turns out, the app identifies some Eastern Poison Ivy for you (and it could do this regardless of which season we’re talking about, as the app would be able to ascertain the current date and the current GPS coordinates of the person’s location as well, taking that criteria into account).
Or consider another use of such an app:
A homeowner who wants to get rid of a certain kind of weed. The homeowner goes out into her yard and “scans” the weed, and up pop some products at the local Lowe’s or Home Depot that get rid of that kind of weed.
Assuming you allowed the app to do so, it could launch a relevant chatbot that could be used to answer any questions about the application of the weed-killing product that you might have.
Or consider another use of such an app:
A homeowner has a diseased tree, and they want to know what to do about it. The machine learning portion of the app could identify what the disease was and bring up information on how to eradicate it.
Again, if permitted to do so, a relevant chatbot could be launched to address any questions that you might have about the available treatment options for that particular tree/disease.
Or consider other/similar apps along these lines:
Skin ML (for detecting any issues re: acne, skin cancers, etc.)
Minerals and Stones ML (for identifying which mineral or stone you’re looking at)
Fish ML
Etc.
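The Horticultural App (ML) flow described above (classify the photo, look up species information, build an AR overlay with a user-controlled opacity) can be sketched end to end. Everything here is hypothetical: the classifier is a stub standing in for an on-device or cloud ML model, and the species data is invented for illustration.

```python
# Hypothetical end-to-end sketch of the Horticultural App (ML) flow.

SPECIES_INFO = {
    "Eastern Poison Ivy": "Toxic: leaves of three; avoid contact.",
    "Sugar Maple": "Deciduous; brilliant fall color; source of maple syrup.",
}

def classify_photo_stub(photo_bytes):
    """Stand-in for the ML classifier; always returns one invented label."""
    return "Eastern Poison Ivy"

def build_overlay(photo_bytes, opacity=0.8):
    """Classify the photo and assemble the AR overlay the app would render,
    including the user-controlled opacity for fading the text in and out."""
    species = classify_photo_stub(photo_bytes)
    info = SPECIES_INFO.get(species, "No information available.")
    return {"label": species, "info": info, "opacity": opacity}

print(build_overlay(b"...photo bytes...", opacity=0.5))
```

The opacity value is carried through as plain data because, as suggested above, it is a user preference for the AR layer, not something the classifier needs to know about.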
So there will be many new possibilities that will be coming soon to education, businesses, homeowners, and many others to be sure! The combination of machine learning with AR will open many new doors.
2016 has been promoted as the year of virtual reality. In the space of a few months, we have seen brands like Facebook, Samsung and Sony all come out with VR products of their own. But another closely related industry has been making a growing presence in the tech industry. Augmented reality, or simply AR, is gaining ground among tech companies and even consumers. Google was the first contender for coolest AR product with its Google Glass. Too bad that did not work out; it felt like a product too ahead of its time. Companies like Microsoft, Magic Leap and even Apple are hoping to pick up from where Google left off. They are creating their own smart glasses that will, hopefully, do better than Google Glass. In our article, we look at some of the coolest Augmented Reality smart glasses around.
Some of them are already out while others are in development.
It’s no secret that we here at Labster are pretty excited about VR. However, if we are to successfully introduce VR into education and training we need to know how to create VR simulations that unlock these new great ways of learning.
Christian Jacob and Markus Santoso are trying to re-create the experience of the aforementioned agents in Fantastic Voyage. Working with 3D modelling company Zygote, they and recent MSc graduate Douglas Yuen have created HoloCell, an educational software. Using Microsoft’s revolutionary HoloLens AR glasses, HoloCell provides a mixed reality experience allowing users to explore a 3D simulation of the inner workings, organelles, and molecules of a healthy human cell.
Upload is teaming up with Udacity, Google and HTC to build an industry-recognized VR certification program.
…
According to Udacity representatives, the organization will now be adding a VR track to its “nanodegree” program. Udacity’s nanodegrees are certification routes that can be completed entirely online at a student’s own pace. These courses typically take between six and 12 months and cost $199 per month. Students will also receive half of their tuition back if they complete a course within six months. The new VR course will follow this pattern as well.
The VR nanodegree program was curated by Udacity after the organization interviewed dozens of VR savvy companies about the type of skills they look for in a potential new hire. This information was then built into a curriculum through a joint effort between Google, HTC and Upload.
Virtual reality helps Germany catch last Nazi war criminals — from theguardian.com by Agence France-Presse
Lack of knowledge no longer an excuse as precise 3D model of Auschwitz, showing gas chambers and crematoria, helps address atrocities
Excerpt:
German prosecutors and police have developed 3D technology to help them catch the last living Nazi war criminals with a highly precise model of Auschwitz.
German prosecutors and police have begun using virtual reality headsets in their quest to bring the last remaining Auschwitz war criminals to justice, AFP reported Sunday.
Using the blueprints of the death camp in Nazi-occupied Poland, Bavarian state crime office digital imaging expert Ralf Breker has created a virtual reality model of Auschwitz which allows judges and prosecutors to mimic moving around the camp as it stood during the Holocaust.
Technology is hoping to turn empathy into action. Or at least, the United Nations is hoping to do so. The intergovernmental organization is more than seven decades old at this point, but it’s constantly finding new ways to better the world’s citizenry. And the latest tool in its arsenal? Virtual reality.
Last year, the UN debuted its United Nations Virtual Reality, which uses the technology to advocate for communities the world over. And more recently, the organization launched an app made specifically for virtual reality films. First debuted at the Toronto International Film Festival, this app encourages folks to not only watch the UN’s VR films, but to then take action by way of donations or volunteer work.
If you’re an Apple user and want an untethered virtual reality system, you’re currently stuck with Google Cardboard, which doesn’t hold a candle to the room scale VR provided by the HTC Vive (a headset not compatible with Macs, by the way). But spatial computing company Occipital just figured out how to use their Structure Core 3D Sensor to provide room scale VR to any smartphone headset—whether it’s for an iPhone or Android.
The Body VR is a great example of how the Oculus Rift and Gear VR can be used to educate as well as entertain. Starting today, it’s also a great example of how the HTC Vive can do the same.
The developers previously released this VR biology lesson for free back at the launch of the Gear VR and, in turn, the Oculus Rift. Now an upgraded version is available on Valve and HTC’s Steam VR headset. You’ll still get the original experience in which you explore the human body, travelling through the bloodstream to learn about blood cells and looking at how organelles work. The piece is narrated as you go.
For a moment, students were taken into another world without leaving the great halls of Harvard. Some students had a great time exploring the ocean floor and saw unique underwater animals, others tried their hand in hockey, while others screamed as they got into a racecar and sped on a virtual speedway. All of them, getting a taste of what virtual and augmented reality looks like.
All of these, of course, were not just about fun but on how especially augmented and virtual reality can transform every kind of industry. This will be discussed and demonstrated at the i-lab in the coming weeks with Rony Abovitz, CEO of Magic Leap Inc., as the keynote speaker.
Abovitz was responsible for developing the “Mixed Reality Lightfield,” a technology that combines augmented and virtual reality. According to Abovitz, it will help those who are struggling to transfer two-dimensional information or text into “spatial learning.”
“I think it will make life easier for a lot of people and open doors for a lot of people because we are making technology fit how our brains evolved into the physics of the universe rather than forcing our brains to adapt to a more limited technology,” he added.
A new kind of credential has entered the crowded market for online learning.
EdX, a Massachusetts-based nonprofit that provides online courses, announced last week the creation of 19 “MicroMasters” courses, a new type of online educational program. These courses are tailored master’s degree-level classes that can help students hone skills that will be immediately useful in the workplace.
“I think the MicroMasters is a big next step in the evolution of education,” Anant Agarwal, the CEO of edX and an MIT professor, said in an interview last week.
These courses – offered through 14 universities including Columbia, Arizona State University and the University of Michigan, as well as some in Australia, Europe and India – are open to anyone who wants to take them. No transcripts or prerequisites required. Students don’t even need a GED to enroll.
Anyone can learn in the MicroMasters program for free, although those who wish to receive a certificate of completion must pay a $1,000 fee. That money gives the student more than a piece of paper; it also pays for extra services, such as more attention from the instructor.
Also somewhat related/see (emphasis DSC):
An Online Education Breakthrough? A Master’s Degree for a Mere $7,000 — from nytimes.com by Kevin Carey
Excerpt:
Georgia Tech rolled out its online master’s in computer science in 2014. It already had a highly selective residential master’s program that cost about the same as those of competitor colleges. Some may see online learning as experimental or inferior, something associated with downmarket for-profit colleges. But the nation’s best universities have fully embraced it. Syracuse, Johns Hopkins, U.S.C. and others have also developed online master’s degrees, for which they charge the same tuition as their residential programs. Georgia Tech decided to do something different. It charges online students the smallest amount necessary to cover its costs. That turned out to be $510 for a three-credit class. U.S.C. charges online students $5,535 for a three-credit class. (Both programs also charge small per-semester fees.)
With one of the top 10 computer science departments in the nation, according to U.S. News & World Report, Georgia Tech had a reputation to uphold. So it made the online program as much like the residential program as possible.