What might our learning ecosystems look like by 2025? [Christian]

This posting can also be seen out at evoLLLution.com (where LLL stands for lifelong learning):

[Link: DanielChristian-evoLLLutionDotComArticle-7-31-15]

 

From DSC:
What might our learning ecosystems look like by 2025?

In the future, learning “channels” will offer more choice, more control.  They will be far more sophisticated than what we have today.

 

[Image: MoreChoiceMoreControl-DSC]

 

That said, which aspects of online course design will matter most 10 years from now depends upon what types of “channels” exist by then and what is offered via those channels. By channels, I mean forms, methods, and avenues of learning that a person could pursue and use. In 2015, some example channels might be:

  • Attending a community college, a college or a university to obtain a degree
  • Obtaining informal learning during an internship
  • Using social media such as Twitter or LinkedIn
  • Reading blogs, books, periodicals, etc.

In 2025, there will likely be new and powerful channels for learning, enabled by innovative forms of communications along with new software, hardware, technologies, and other advancements. For example, one could easily imagine:

  • That the trajectory of deep learning and artificial intelligence will continue, opening up new methods of how we might learn in the future
  • That augmented and virtual reality will allow for mobile learning to the Nth degree
  • That the trend of Competency Based Education (CBE) and microcredentials may be catapulted into the mainstream via the use of big data-related affordances

Due to time and space limitations, I’ll focus here on the more formal learning channels that will likely be available online in 2025. In that environment, I think we’ll continue to see different needs and demands – thus we’ll still need a menu of options. However, the learning menu of 2025 will be more personalized, powerful, responsive, sophisticated, flexible, granular, modularized, and mobile.

 


Highly responsive, career-focused track


One part of the menu of options will focus on addressing the demand for more career-focused information and learning that is available online (24×7). Even in 2015, with the U.S. government saying that 40% of today’s workers now have ‘contingent’ jobs and others saying that percentage will continue climbing to 50% or more, people will be forced to learn quickly in order to stay marketable. Also, the half-lives of information may be quite short, especially if we continue on our current trajectory of exponential change (vs. linear change).

However, keeping up with that pace of change is currently proving to be out of reach for most institutions of higher education, especially given the current state of accreditation and governance structures throughout higher education, as well as how our current teaching and learning environment is set up (i.e., the use of credit hours, 4-year degrees, etc.). By 2025, accreditation will have been forced to change to allow for alternative forms of learning and alternative methods of obtaining credentials. Organizations that offer channels with a more vocational bent to them will need to be extremely responsive as they attempt to offer up-to-date, highly relevant information that will immediately help people be more employable and marketable. Being nimble will be the name of the game in this arena. Streams of content will be especially important here; there may not be enough time to merit creating formal, sophisticated courses on many career-focused topics.

 

[Image: StreamsOfContent-DSC]

 

With streams of content, the key value provided by institutions will be to curate the most relevant, effective, reliable, up-to-date content…so one doesn’t have to drink from the Internet’s firehose of information. Such streams of content will also surface potentially game-changing scenarios and will provide a pulse check on a variety of trends that could affect an industry. Social-based learning will be key here, as learners contribute to each other’s learning. Subject Matter Experts (SMEs) will need to be knowledgeable facilitators of learning; but given the pace of change, true experts will be rare indeed.

Microcredentials, nanodegrees, competency-based education, and learning from one’s living room will be standard channels in 2025.  Each person may have a web-based learner profile by then and the use of big data will keep that profile up-to-date regarding what any given individual has been learning about and what skills they have mastered.

For example, even now in 2015, a company called StackUp creates its StackUp Report to add to one’s resume or grades, asserting that its services can give “employers and schools new metrics to evaluate your passion, interests, and intellectual curiosity.” StackUp captures, categorizes, and scores everything you read and study online. It can track your engagement on a given website, for example, and then score the time spent doing so. This type of information can then provide insights into the time you spend learning.
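To make the idea concrete, here is a rough Python sketch of that kind of engagement scoring. The class and method names are hypothetical, and the scoring scheme is my own invention, not StackUp's actual system:

```python
from collections import defaultdict

class LearnerProfile:
    """Hypothetical web-based learner profile that captures,
    categorizes, and scores time spent reading by topic."""

    def __init__(self):
        self.seconds_by_topic = defaultdict(float)

    def record_visit(self, topic, seconds):
        # Capture and categorize one reading session.
        self.seconds_by_topic[topic] += seconds

    def scores(self):
        # Score each topic as its share of total engaged time.
        total = sum(self.seconds_by_topic.values())
        if total == 0:
            return {}
        return {t: round(s / total, 2) for t, s in self.seconds_by_topic.items()}

profile = LearnerProfile()
profile.record_visit("statistics", 1800)
profile.record_visit("python", 600)
print(profile.scores())  # {'statistics': 0.75, 'python': 0.25}
```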

Project teams and employers could create digital playlists that prospective employees or contractors will have to advance through; and such teams and employers will be watching to see how the learners perform in proving their competencies.
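A minimal sketch of such a digital playlist, with hypothetical names and a made-up competency threshold per item, might look like this:

```python
class LearningPlaylist:
    """Hypothetical digital playlist: a learner advances only after
    proving competency on the current item."""

    def __init__(self, items):
        # items: list of (title, passing_score) pairs
        self.items = items
        self.position = 0

    def current(self):
        # Title of the item the learner is on, or None when finished.
        return self.items[self.position][0] if self.position < len(self.items) else None

    def submit(self, score):
        # Advance when the submitted score meets the item's threshold.
        title, passing = self.items[self.position]
        if score >= passing:
            self.position += 1
            return True
        return False

playlist = LearningPlaylist([("Intro video", 0.7), ("Hands-on project", 0.8)])
playlist.submit(0.9)       # passes "Intro video"
print(playlist.current())  # Hands-on project
```

An employer could inspect `position` and the submitted scores to see how a prospective contractor performed along the way.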

However, not all learning will be in the fast lane and many people won’t want all of their learning to be constantly in the high gears. In fact, the same learner could be pursuing avenues in multiple tracks, traveling through their learning-related journeys at multiple speeds.

 


The more traditional liberal arts track


To address these varied learning preferences, another part of the menu will focus on channels that don’t need to change as frequently.  The focus here won’t be on quickly-moving streams of content, but the course designers in this track can take a bit more time to offer far more sophisticated options and activities that people will enjoy going through.

Along these lines, some areas of the liberal arts* will fit in nicely here.

*Speaking of the liberal arts, a brief but important tangent needs to be addressed, for strategic purposes. While the following statement will likely be highly controversial, I’m going to say it anyway.  Online learning could be the very thing that saves the liberal arts.

Why do I say this? Because as the price of higher education continues to increase, so do learners’ expectations and perspectives. It may turn out that people are only willing to pay a fraction of today’s prices. Such greatly reduced prices won’t likely be achievable in face-to-face environments, as offering that type of learning environment is expensive. However, they could be offered via online-based environments. So, much to the chagrin of many in academia, online learning could be the very thing that provides the type of learning, growth, and some of the experiences that liberal arts programs have been about for centuries. Online learning can offer a lifelong supply of the liberal arts.

But I digress…
By 2025, a Subject Matter Expert (SME) will be able to offer excellent, engaging courses chock-full of:

  • Engaging story/narrative
  • Powerful collaboration and communication tools
  • Sophisticated tracking and reporting
  • Personalized learning, tech-enabled scaffolding, and digital learning playlists
  • Game elements or even, in some cases, multiplayer games
  • Highly interactive digital videos with built-in learning activities
  • Transmedia-based outlets and channels
  • Mobile-based learning using AR, VR, real-world assignments, objects, and events
  • …and more.

However, such courses won’t be created by any one person. Their sophistication will require a team of specialists – and likely a list of vendors, algorithms, and/or open source-based tools – to design and deliver this type of learning track.

 


Final reflections


The marketplaces involving education-related content and technologies will likely look different. There could be marketplaces for algorithms as well as for very granular learning modules. In fact, modularization could be huge by 2025, allowing digital learning playlists to be built by an SME, a Provost, and/or a Dean (in addition to the aforementioned employer or project team). Any assistance that a learner requires will be provided either via technology (likely an Artificial Intelligence (AI)-enabled resource) or via an SME.

We will likely either have moved away from using Learning Management Systems (LMSs) or those LMSs will allow for access to far larger, integrated learning ecosystems.

Functionality-wise, collaboration tools will still be important, but the tools of 2025 might be mind-blowing to those of us living in 2015. For example, holographic-based communications could easily be commonplace by 2025. Where tools like IBM’s Watson, Microsoft’s Cortana, Google’s DeepMind, and Apple’s Siri end up in our future learning ecosystems is hard to tell, but they will likely be there. New forms of Human Computer Interaction (HCI) such as Augmented Reality (AR) and Virtual Reality (VR) will likely be mainstream by 2025.

While the exact menu of learning options is unclear, what is clear is that change is here today and will likely be here tomorrow. Those willing to experiment, to adapt, and to change have a far greater likelihood of surviving and thriving in our future learning ecosystems.

 

Part 3: Google Search will be your next brain — from medium.com by Steven Levy
Inside Google’s massive effort in Deep Learning, which could make already-smart search into scary-smart search

Excerpt:

But about ten years ago, in Hinton’s lab at the University of Toronto, he and some other researchers made a breakthrough that suddenly made neural nets the hottest thing in AI. Not only Google but other companies such as Facebook, Microsoft and IBM began frantically pursuing the relatively minuscule number of computer scientists versed in the black art of organizing several layers of artificial neurons so that the entire system could be trained, or even train itself, to divine coherence from random inputs, much in the way that a newborn learns to organize the data pouring into his or her virgin senses. With this newly effective process, dubbed Deep Learning, some of the long-standing logjams of computation (like being able to see, hear, and be unbeatable at Breakout) would finally be untangled. The age of intelligent computer systems — long awaited and long feared — would suddenly be breathing down our necks. And Google search would work a whole lot better.

This breakthrough will be crucial in Google Search’s next big step: understanding the real world to make a huge leap in accurately giving users the answers to their questions as well as spontaneously surfacing information to satisfy their needs. To keep search vital, Google must get even smarter.

This is very much in character for the Internet giant. From its earliest days, the company’s founders have been explicit that Google is an artificial intelligence company. It uses its AI not just in search — though its search engine is positively drenched with artificial intelligence techniques — but in its advertising systems, its self-driving cars, and its plans to put nanoparticles in the human bloodstream for early disease detection.

Indeed, as of now, all Google’s deep learning work has yet to make a big mark on Google search or other products. But that’s about to change.
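For readers unfamiliar with the "several layers of artificial neurons" Levy describes, a toy forward pass shows the basic structure: each layer transforms its input, and stacking layers is what lets the system be trained to find structure in raw data. Everything here (layer sizes, random weights, the sigmoid) is purely illustrative:

```python
import math
import random

random.seed(0)

def layer(inputs, weights):
    # One layer of artificial neurons: weighted sums of the inputs,
    # each passed through a sigmoid nonlinearity.
    return [1 / (1 + math.exp(-sum(w * x for w, x in zip(row, inputs))))
            for row in weights]

# Random weights for a tiny 3-input -> 4-hidden -> 2-output network.
w_hidden = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(4)]
w_out = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(2)]

x = [0.5, -0.2, 0.8]
hidden = layer(x, w_hidden)   # first layer of neurons
output = layer(hidden, w_out) # second layer stacked on the first
print(len(hidden), len(output))  # 4 2
```

Training (adjusting those weights from examples) is the hard part the excerpt alludes to; this sketch only shows how the layers stack.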

 

Also see the other parts in this series:

Part 1: The never ending search

Excerpt:

Google’s flagship product has been part of our lives for so long that we take it for granted. But Google doesn’t. Part One of a study of Search’s quiet transformation.

 

Part 2: How Google knows what you want to know
Eight times a day Google asks test subjects about their information needs. Their replies can be sobering.

Excerpt:

Google search really isn’t threatened by competition from other search engines. But the people on the search team constantly worry that they may be falling short in satisfying the needs of their users. To address that problem, of course, Google needs to know what those needs are. One way to do this is by examining the logs to see what queries are unsatisfied. But there are lots of things people want to know that they aren’t asking Google about.

How does Google know what those needs are?

It asks them.

Every year since 2011 Google has run an annual study to learn what people really, really want to know, whether it’s something Google provides or not. It’s called Daily Information Needs, but the psychologists at Google involved with the project just call it DIN.

 

Part 4: The Deep Mind of Demis Hassabis — from medium.com by Steven Levy
Google’s prize AI prodigy tells all. In the race to recruit the best AI talent, Google scored a coup by getting the team led by a former video game guru and chess prodigy

Excerpt:

From the day in 2011 that Demis Hassabis co-founded DeepMind—with funding by the likes of Elon Musk—the UK-based artificial intelligence startup became the most coveted target of major tech companies. In June 2014, Hassabis and his co-founders, Shane Legg and Mustafa Suleyman, agreed to Google’s purchase offer of $400 million. Late last year, Hassabis sat down with Backchannel to discuss why his team went with Google—and why DeepMind is uniquely poised to push the frontiers of AI. The interview has been edited for length and clarity.

 

 

 

Addendum on 3/16/15:

 

[Image: DeepLearning-Moz-March2015]

 

 

[Image: Cognitoy-ElementalPath-March2015]
[Image: CognitoyFramed-March2015]

 

 

From DSC:
Given the above…what are the ramifications of that in our/your work?

 

 

Also see:

 

 

A related addendum on 3/11/15
Look at the different expectations of the generations found in this article:

 

A related addendum on 3/17/15:

Excerpt:
The overall goal for DragonBot (which, as far as I can tell, is a common platform used for many different projects) is to develop “personalized learning companions” for children. In other words, MIT is finding ways in which robots like DragonBot can effectively help kids learn.

DragonBot isn’t intended to work like that IBM Watson-based dinosaur robot; it’s not a primary source of knowledge, and it’s not actively teaching a whole bunch of new facts to kids who use it. Rather, DragonBot is intended to help with the process of learning itself, encouraging kids to be interactively engaged in whatever they happen to be learning about.

 

 

Example snapshots from Microsoft’s Productivity Future Vision:

[Images: MicrosoftProductivityVision 1–8, 2015]

 

 

 

A vision for radically personalized learning | Katherine Prince | TEDxColumbus

Description:

Could we transform today’s outmoded education system to a vibrant learning ecosystem that puts learners at the center and enables many right combinations of learning resources, experiences, and supports to help each child succeed? Creating personalized learning for all young people will require a paradigm shift in education and a deep commitment to providing each student with the right experiences at the right time.

As Senior Director of Strategic Foresight at KnowledgeWorks, Katherine Prince leads the organization’s work on the future of learning. Since 2007, she has helped a wide range of education stakeholders translate KnowledgeWorks’ future forecasts into forward-looking visions and develop strategies for bringing those visions to life. She also writes about what trends shaping the future of learning could mean for the learning ecosystem.

 


 

[Image: Context-Evernote]

 

Excerpt from Context: Your Work Enriched by the Smartest Minds — from blog.evernote.com

Good research happens in three phases. You recall what you know. You consult with someone. You search external sources. We’re applying our machine learning and augmented intelligence expertise to present you with all three research phases automatically, at the moment you need them, without ever leaving your workspace. As you work, Evernote is automatically looking for other information and content that might help you connect the dots/see the big picture. This content can take the form of other notes, people you might talk to or even relevant news sources.

In Evernote, every phrase informs our algorithms about what other content might help you further your project. We call this Context. It’s an extremely powerful new Premium feature coming soon to Evernote.

Your knowledge

Your team’s knowledge

Your network

The professionals: Possibly the most powerful new benefit that Context brings is a look at related information from premier news and information sources, including…

  • The Wall Street Journal
  • Factiva
  • LinkedIn
  • TechCrunch
  • CrunchBase
  • Fast Company
  • Inc. Magazine
  • PandoDaily
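As a rough illustration of how this kind of related-content suggestion can work, here is a toy ranking based on simple word overlap between the note you are writing and your stored notes. This is my own sketch, not Evernote's actual Context algorithm:

```python
def related_notes(current, notes, top_n=3):
    """Toy related-content ranking: score each stored note by how many
    words it shares with the note being written. (Illustrative only.)"""
    current_words = set(current.lower().split())
    scored = []
    for title, body in notes.items():
        overlap = len(current_words & set(body.lower().split()))
        if overlap:
            scored.append((overlap, title))
    # Highest overlap first; return just the titles.
    return [title for _, title in sorted(scored, reverse=True)[:top_n]]

notes = {
    "Q3 budget": "budget forecast for the third quarter",
    "Travel plans": "flights and hotel for the conference",
    "Budget meeting": "notes from the annual budget planning meeting",
}
print(related_notes("draft of next year's budget forecast", notes))
# ['Q3 budget', 'Budget meeting']
```

A production system would use far richer signals (phrases, your team's notes, external news sources, machine-learned relevance), but the shape of the problem is the same: as you type, score candidate content and surface the best matches.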

 

Also see:

  • Evernote’s CEO: Siri and wearables are doing it wrong — from engadget.com by Devindra Hardawar; with thanks to Mr. Emory Craig for posting this on Twitter
    Excerpt:
    You can see this methodology in place with Context, the new Evernote feature that fetches articles related to your work. Links automatically appear at the bottom of your notes as you’re typing, alongside your past notes and those from your coworkers.

    When you talk about anticipatory computing, it’s only a matter of time until the broader notion of augmented intelligence comes up.

    There are already glimpses of it in Google Now, which is more of an anticipatory notification platform than a friendly assistant like Siri.
 

[Image: LearningNowTV-Nov2014]

 


From their website:
(emphasis DSC)

LEARNING NOW tv is a live-streamed internet TV channel bringing you inspirational interviews, debates and round tables, and advice and guidance on real-world issues to keep you up to date in the world of learning and development.

Membership to the channel is FREE. You will be able to interact with us on our social channel during the live stream as well as having a resource of the recorded programmes to refer to throughout the year.

Learning Now tv is run and produced by some of the L&D world’s leading experts who have many years’ experience of reporting the real-world issues for today’s learning and development professionals.

 

I originally saw this at Clive Shepherd’s posting:
TV very much alive for learning professionals

 

 

Also see:

 

[Image: MYOB-July2014]

 

 

 

 

This new service makes me think of some related graphics:

 

 

[Image: MoreChoiceMoreControl-DSC]

[Image: StreamsOfContent-DSC]

[Image: The Living [Class] Room -- by Daniel Christian -- July 2012 -- a second device used in conjunction with a Smart/Connected TV]

 

 

 

 

 

Addendum on 12/2/14 — from Learning TRENDS by Elliott Masie – December 2, 2014 | #857

Idea – Courses in the Air:
There were representatives from airlines, Aviation Authorities and even Panasonic – which makes the interactive movie and TV systems on long distance airplanes.  So, I rolled out one of my “aha ideas” that I would love to see invented sometime: Courses in the Air.

What if a passenger could choose to take a mini-course on a 4 to 14 hour flight. It would be a MOOC in the Sky – with video, reading and interactive elements – and someday might even include a real time video chat function as well.  The learner could strive to earn a “badge” or roll them up into a certificate or degree program – that they pursued over several years of flights.  It would be an intriguing element to add to international travel.

 

[Image: IBM-UK-Watson-Nov2014]

 

 

Excerpt from IBM grants UK universities unprecedented access to AI system Watson — from information-age.com by Ben Rossi

 

The University of Southampton and Imperial College London have today announced partnerships with IBM to offer students and staff cognitive computing education with unprecedented access to IBM‘s Watson technology and experts.

Imperial College London will offer new courses to provide students with opportunities for hands-on learning as they work to develop cognitive computing solutions to address business and societal challenges.

The partnership extends cognitive systems activities in Imperial’s Department of Computing as well as in other college departments already involved in related interdisciplinary research.

 

 

Also see:

 

[Image: WhatIsWatson-Nov2014]
 

From DSC:
I’m thinking out loud again…

What if we were able to take the “If This Then That” (IFTTT) concept/capabilities and combine them with sensor-based technologies? It seems to me that we’re at the very embryonic stages of some very powerful learning scenarios, scenarios that are packed with learning potential, engagement, intrigue, interactivity, and opportunities for participation.

For example, what would happen if you went to one corner of the room, causing an app on your mobile device to launch and bring up a particular video to review? After you view the video, a brief quiz appears to check your understanding of the video’s main points. Then, once you’ve submitted the quiz — and it’s been received by system ABC — this triggers an unexpected learning event for you.

Combining the physical with the digital…

Establishing IFTTT-based learning playlists…

Building learning channels…learning triggers…learning actions…

Setting a schedule of things to do for a set of iBeacons over a period of time (and being able to save that schedule of events for “next time”).

Hmmm…there’s a lot of potential here!
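The "if this then that" idea above can be sketched as a tiny trigger-action rule engine: sensed events (walking to a corner, submitting a quiz) fire registered learning actions. The event names and actions below are hypothetical:

```python
class LearningTriggers:
    """Hypothetical IFTTT-style rule engine for learning events:
    'if this (a sensed event) then that (a learning action)'."""

    def __init__(self):
        self.rules = []

    def if_this_then_that(self, event, action):
        # Register a rule: when `event` is sensed, run `action`.
        self.rules.append((event, action))

    def fire(self, event):
        # Run every action registered for the sensed event.
        return [action() for ev, action in self.rules if ev == event]

engine = LearningTriggers()
engine.if_this_then_that("entered:room-corner-A", lambda: "play intro video")
engine.if_this_then_that("quiz:submitted", lambda: "unlock bonus activity")

print(engine.fire("entered:room-corner-A"))  # ['play intro video']
```

An iBeacon schedule would then just be a set of such rules keyed to beacon-proximity events, saved so it can be reused "next time."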

 

 

[Image: IfThisThenThat-Combined-With-iBeacons]

[Image: IfThisThenThat]

[Image: iBeaconsAndEducation-8-10-14]

 

 

Now throw augmented reality, wearables, and intelligent tutoring into the equation! Whew!

We need to be watching out for how machine-to-machine (M2M) communications can be leveraged in the classrooms and training programs across the globe.

One last thought here…
How are we changing our curricula to prepare students to leverage the power of the Internet of Things (IoT)?

 

EDUCAUSE 2014: What IBM’s Watson could bring to higher education — from edtechmagazine.com by D. Frank Smith
Cognitive computing-powered tutors could spark a new age of discovery for students.

Excerpt:

IBM’s Watson, a cognitive computing system that simulates the human thought process, could soon be peering over teacher’s shoulders in classrooms, the company said at EDUCAUSE 2014 on Wednesday.

Several of IBM’s top education leaders hosted a panel at the conference laying out Watson’s trajectory in higher education. The cognitive computer’s ability to digest large data sets and communicate with humans could open new avenues for teaching, said Michael D. King, vice president, IBM Global Education Industry.

“I think the real impact on learning will start to come in the classroom, if you can imagine intelligent tutors — a system that can truly be interactive with the learner as they’re engaging and learning the materials,” King said.

 

[Image: Educause2014-Christensen-Online-Disruption]

 

Excerpt:

Higher education institutions are poised for a massive shake-up, not unlike what tech companies experienced in the 1980s during the rise of the PC, said EDUCAUSE’s first general session speaker.

“Disruption is always a great opportunity before it becomes a threat,” he said.

“In the future, I don’t think universities themselves will be nearly as prominent as they have been in the past,” he said.

 

 

Also see:

 

 

 

 

Beacons at the museum: Pacific Science Center to roll out location-based Mixby app next month — from geekwire.com by Todd Bishop

Excerpt:

Seattle’s Pacific Science Center has scheduled an Oct. 4 public launch for a new system that uses Bluetooth-enabled beacons and the Mixby smartphone app to offer new experiences to museum guests — presenting them with different features and content depending on where they’re standing at any given moment.

 

Also see:

 

From DSC:
The use of location-based apps & associated technologies (machine-to-machine (M2M) communications) should be part of all ed tech planning from here on out — and also applicable to the corporate world and training programs therein. 

Not only applicable to museums, but also to art galleries, classrooms, learning spaces, campus tours, and more.  Such apps could be used on plant floors in training-related programs as well.

Now mix augmented reality in with location-based technology.  Come up to a piece of artwork, and a variety of apps could be launched to really bring that piece to life! Some serious engagement.

Digital storytelling. The connection of the physical world with the digital world. Digital learning. Physical learning. A new form of blended/hybrid learning.  Active learning. Participation.
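A minimal sketch of the location-to-content idea described above, assuming the app simply shows content for the beacon with the strongest signal. All names and values are made up; this is not the Mixby app's actual logic:

```python
def nearest_beacon(readings):
    """Pick the beacon with the strongest signal, i.e. the highest
    RSSI (least negative dBm). Illustrative sketch only."""
    return max(readings, key=readings.get)

# Hypothetical museum mapping: beacon id -> content shown to the visitor.
content_by_beacon = {
    "gallery-12": "Audio tour: Impressionist landscapes",
    "dino-hall": "AR overlay: Triceratops skeleton",
}

# Signal strengths the phone is currently hearing, in dBm.
readings = {"gallery-12": -82, "dino-hall": -61}
print(content_by_beacon[nearest_beacon(readings)])
# AR overlay: Triceratops skeleton
```

The same lookup works for a campus tour stop, a plant-floor training station, or a piece of artwork in a gallery.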

 

 

 

Addendum on 9/4/14 — also see:

Aerohive Networks Delivers World’s First iBeacon™ and AltBeacon™ – Enabled Enterprise Wi-Fi Access Points
New Partnership with Radius Networks Delivers IoT Solution to Provide Advanced Insights and Mobile Experience Personalization

Excerpt (emphasis DSC):

SUNNYVALE, Calif.–(BUSINESS WIRE)–Aerohive Networks® (NYSE:HIVE), a leader in controller-less Wi-Fi and cloud-managed mobile networking for the enterprise market today announced that it is partnering with Radius Networks, a market leader in proximity services and proximity beacons with iBeacon™ and AltBeacon™ technology, to offer retailers, educators and healthcare providers a cloud-managed Wi-Fi infrastructure enabled with proximity beacons. Together, Aerohive and Radius Networks provide complementary cloud platforms for helping these organizations meet the demands of today’s increasingly connected customers who are seeking more personalized student education, patient care and shopper experiences.

 

Also:

 

 

 

[Image: WatsonInBoardRoomMeetingsMIT-Aug2014]

 

Excerpt:

First, Watson was brought up to speed by being directed, verbally, to read over an internal memo summarizing the company’s strategy for artificial intelligence. It was then asked by one of the researchers to use that knowledge to generate a long list of candidate companies. “Watson, show me companies between $15 million and $60 million in revenue relevant to that strategy,” he said.

After the humans in the room talked over the results Watson displayed on screen, they called out a shorter list for Watson to put in a table with columns for key characteristics. After mulling some more, one of them said: “Watson, make a suggestion.” The system ran a set of decision-making algorithms and bluntly delivered its verdict: “I recommend eliminating Kawasaki Robotics.” When Watson was asked to explain, it simply added. “It is inferior to Cognilytics in every way.”

 

Reflections on “C-Suite TV debuts, offers advice for the boardroom” [Dreier]

C-Suite TV debuts, offers advice for the boardroom — from streamingmedia.com by Troy Dreier
Business leaders now have an on-demand video network to call their own, thanks to one Bloomberg host’s online venture.

Excerpt:

Bringing some business acumen to the world of online video, C-Suite TV is launching today. Created by Bloomberg TV host and author Jeffrey Hayzlett, the on-demand video network offers interviews with and shows about business execs. It promises inside information on business trends and the discussions taking place in the biggest boardrooms.

 

[Image: MYOB-July2014]

 

The Future of TV is here for the C-Suite — from hayzlett.com by Jeffrey Hayzlett

Excerpt:

Rather than wait for networks or try and gain traction through the thousands of cat videos, we went out and built our own network.

 

 

See also:

  • Mind your own business
    From the About page:
    C-Suite TV is a web-based digital on-demand business channel featuring interviews and shows with business executives, thought leaders, authors and celebrities providing news and information for business leaders. C-Suite TV is your go-to resource to find out the inside track on trends and discussions taking place in businesses today. This online channel will be home to such shows as C-Suite with Jeffrey Hayzlett, MYOB – Mind Your Own Business and Bestseller TV with more shows to come.

 

 

From DSC:
The above items took me back to the concept of Learning from the Living [Class] Room.

Many of the following bullet points are already happening — but what I’m trying to influence/suggest is to bring all of them together in a powerful, global, 24 x 7 x 365, learning ecosystem:

  • When our “TVs” become more interactive…
  • When our mobile devices act as second screens and when second screen-based apps are numerous…
  • When discussion boards, forums, social media, assignments, assessments, and videoconferencing capabilities are embedded into our Smart/Connected TVs and are also available via our mobile devices…
  • When education is available 24 x 7 x 365…
  • When even the C-Suite taps into such platforms…
  • When education and entertainment are co-mingled…
  • When team-based educational content creation and delivery are mainstream…
  • When self-selecting Communities of Practice thrive online…
  • When Learning Hubs combine the best of both worlds (online and face-to-face)…
  • When Artificial Intelligence, powerful cognitive computing capabilities (i.e., IBM’s Watson), and robust reporting mechanisms are integrated into the backends…
  • When lifelong learners have their own cloud-based profiles…
  • When learners can use their “TVs” to tap into interactive, multimedia-based streams of content of their choice…
  • When recommendation engines are offered not just at Netflix but also at educationally-oriented sites…
  • When online tutoring and intelligent tutoring really take off…

…then I’d say we’ll have a powerful, engaging, responsive, global education platform.

 

 

[Image: The Living [Class] Room -- by Daniel Christian -- July 2012 -- a second device used in conjunction with a Smart/Connected TV]

 

 

 

Seven of the nation’s leading technology institutions unveil cognitive computing courses leveraging IBM Watson — from IBM.com
In Fall, 2014, new courses will inspire university students to build apps infused with Watson’s intelligence while gaining the entrepreneurial vision to deliver their innovations into the marketplace. Announcement marks the newest step in IBM’s strategy to fuel an ecosystem of innovators who will make cognitive computing the new worldwide standard of computing.

Excerpt:

ARMONK, N.Y. – 07 May 2014: IBM (NYSE: IBM) is partnering with the country’s leading technology universities to launch cognitive computing courses that give students unprecedented access via the cloud to one of the Company’s most prized innovations: Watson.

For the first time, enrollment is now open for fall 2014 cognitive computing courses at Carnegie Mellon University, New York University (NYU), The Ohio State University, Rensselaer Polytechnic Institute (RPI), University of California, Berkeley, University of Michigan and the University of Texas at Austin.

Co-designed by the Watson Group and leading academic experts in fields such as Artificial Intelligence and Computer Science, the courses will empower students with the technical knowledge and hands-on learning required to develop new cognitive computing applications fueled by Watson’s intelligence.

 

IBM partners with universities on Watson projects — from abcnews.go.com by Bree Fowler

Excerpt:

Watson is going to college.

Students at seven of the country’s top computer science universities will get a chance to try out IBM’s famous cognitive computing system as part of new classes set for next fall.

The partnership between Armonk, New York-based IBM and the universities, which was set to be announced Wednesday, will let students use the “Jeopardy!” champion to develop new cognitive computing applications for a variety of industries ranging from health care to finance.

“If they’re interested in these kinds of technologies, when they graduate they’re going to have a natural proclivity to designing them,” says Michael Rhodin, IBM’s senior vice president overseeing Watson.

“The logic here is that the next generation of entrepreneurs is in universities today.”

 
© 2024 | Daniel Christian