What a future, powerful, global learning platform will look & act like [Christian]


Learning from the Living [Class] Room:
A vision for a global, powerful, next generation learning platform

By Daniel Christian

NOTE: Having recently lost my Senior Instructional Designer position due to a staff reduction program, I am looking to help build such a platform as this. So if you are working on such a platform or know of someone who is, please let me know: danielchristian55@gmail.com.

I want to help people reinvent themselves quickly, efficiently, and cost-effectively — while providing more choice, more control to lifelong learners. This will become critically important as artificial intelligence, robotics, algorithms, and automation continue to impact the workplace.


 

The Living [Class] Room -- by Daniel Christian -- July 2012 -- a second device used in conjunction with a Smart/Connected TV

 

Learning from the Living [Class] Room:
A global, powerful, next generation learning platform

 

What does the vision entail?

  • A new, global, collaborative learning platform that offers more choice, more control to learners of all ages – 24×7 – and could become the organization that futurist Thomas Frey discusses here with Business Insider:

“I’ve been predicting that by 2030 the largest company on the internet is going to be an education-based company that we haven’t heard of yet,” Frey, the senior futurist at the DaVinci Institute think tank, tells Business Insider.

  • A learner-centered platform that is enabled by – and reliant upon – human beings but is backed up by a powerful suite of technologies that work together in order to help people reinvent themselves quickly, conveniently, and extremely cost-effectively
  • An AI-backed system that analyzes employment trends and opportunities and highlights the courses and “streams of content” that will help someone obtain the most in-demand skills
  • A system that tracks learning and, via Blockchain-based technologies, feeds all completed learning modules/courses into learners’ web-based learner profiles (a brief sketch of this idea follows this list)
  • A learning platform that provides customized, personalized recommendation lists – based upon the learner’s goals
  • A platform that delivers customized, personalized learning within a self-directed course (meant for those content creators who want to deliver more sophisticated courses/modules while moving people through the relevant Zones of Proximal Development)
  • Notifications and/or inspirational quotes, available upon request, that provide motivation, encouragement, and accountability – helping learners establish habits of continual, lifelong learning
  • (Potentially) An online-based marketplace, matching learners with teachers, professors, and other such Subject Matter Experts (SMEs)
  • (Potentially) Direct access to popular job search sites
  • (Potentially) Direct access to resources that describe what other companies do/provide and descriptions of any particular company’s culture (as described by current and former employees and freelancers)
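To make the Blockchain-based tracking idea above a bit more concrete, here is a minimal, hypothetical sketch (in Python) of how completed modules might be chained into a tamper-evident, web-based learner profile. The record fields, module names, and the use of simple SHA-256 hashing are illustrative assumptions only; an actual platform would rely on a real distributed-ledger service and proper identity management.

```python
# Toy sketch: chain completed-module records so that silently altering any
# earlier record breaks verification of everything that follows.
import hashlib
import json
from datetime import datetime, timezone

def add_completed_module(profile, learner_id, module_id):
    """Append a completion record whose hash also covers the previous record."""
    prev_hash = profile[-1]["hash"] if profile else "0" * 64
    record = {
        "learner_id": learner_id,          # hypothetical identifiers
        "module_id": module_id,
        "completed_at": datetime.now(timezone.utc).isoformat(),
        "prev_hash": prev_hash,
    }
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    profile.append(record)
    return profile

def verify_profile(profile):
    """Recompute every hash; return False if any record was edited or reordered."""
    prev_hash = "0" * 64
    for record in profile:
        body = {k: v for k, v in record.items() if k != "hash"}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if record["prev_hash"] != prev_hash or record["hash"] != expected:
            return False
        prev_hash = record["hash"]
    return True

profile = []
add_completed_module(profile, "learner-001", "intro-to-data-analysis")
add_completed_module(profile, "learner-001", "spreadsheet-fundamentals")
print(verify_profile(profile))  # True until any record is tampered with
```

Because each record’s hash covers the previous record, a learner profile built this way remains verifiable even as modules and courses accumulate over many years.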

Further details:
While basic courses will be accessible via mobile devices, the optimal learning experience will leverage two or more displays/devices. So while smaller devices (smartphones, laptops, and/or desktop workstations) will be used to communicate synchronously or asynchronously with other learners, the larger displays will deliver an excellent learning environment for times when there is:

  • A Subject Matter Expert (SME) giving a talk or making a presentation on any given topic
  • A need to display multiple things going on at once, such as:
    • The SME(s)
    • An application or multiple applications that the SME(s) are using
    • Content/resources that learners are submitting in real-time (think Bluescape, T1V, Prysm, other)
    • The ability to annotate on top of the application(s) and point to things w/in the app(s)
    • Media being used to support the presentation such as pictures, graphics, graphs, videos, simulations, animations, audio, links to other resources, GPS coordinates for an app such as Google Earth, other
    • Other attendees (think Google Hangouts, Skype, Polycom, or other videoconferencing tools)
    • An (optional) representation of the Personal Assistant (such as today’s Alexa, Siri, M, Google Assistant, etc.) that’s being employed via the use of Artificial Intelligence (AI)

This new learning platform will also feature:

  • Voice-based commands to drive the system (via Natural Language Processing (NLP))
  • Language translation (using techs similar to what’s being used in Translate One2One, an earpiece powered by IBM Watson)
  • Speech-to-text capabilities for use w/ chatbots, messaging, and inserting discussion board postings
  • Text-to-speech capabilities as an assistive technology – and so that anyone can stay mobile while listening to what’s been typed
  • Chatbots
    • For learning how to use the system
    • For asking questions of – and addressing any issues with – the organization owning the system (credentials, payments, obtaining technical support, etc.)
    • For asking questions within a course
  • As many profiles as needed per household
  • (Optional) Machine-to-machine-based communications to automatically launch the correct profile when the system is initiated (from one’s smartphone, laptop, workstation, and/or tablet to a receiver for the system)
  • (Optional) Voice recognition to efficiently launch the desired profile
  • (Optional) Facial recognition to efficiently launch the desired profile
  • (Optional) Upon system launch, to immediately return to where the learner previously left off
  • The capability of the webcam to recognize objects and bring up relevant resources for that object
  • A built-in RSS feed aggregator – or a similar technology – to enable learners to tap into the relevant “streams of content” that are constantly flowing by them (a brief sketch of this idea follows this list)
  • Social media dashboards/portals – providing quick access to multiple sources of content and whereby learners can contribute their own “streams of content”
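As a brief illustration of the “streams of content” idea mentioned above, the sketch below uses the feedparser library to pull a few RSS feeds and surface items that match a learner’s stated interests. The feed URLs and keywords are placeholders; a production aggregator (Feedly-style) would add caching, ranking, de-duplication, and much more.

```python
# Toy sketch: filter a handful of RSS feeds down to items that mention
# any of a learner's interest keywords. Requires: pip install feedparser
import feedparser

FEEDS = [
    "https://example.com/data-science/rss",           # placeholder URLs
    "https://example.com/instructional-design/rss",
]
INTERESTS = {"machine learning", "assessment", "ux"}  # placeholder keywords

def matching_items(feed_urls, interests, limit=10):
    """Return (title, link) pairs whose titles mention an interest keyword."""
    matches = []
    for url in feed_urls:
        parsed = feedparser.parse(url)
        for entry in parsed.entries:
            title = entry.get("title", "")
            if any(keyword in title.lower() for keyword in interests):
                matches.append((title, entry.get("link", "")))
                if len(matches) >= limit:
                    return matches
    return matches

for title, link in matching_items(FEEDS, INTERESTS):
    print(f"{title} -> {link}")
```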

In the future, new forms of Human Computer Interaction (HCI) such as Augmented Reality (AR), Virtual Reality (VR), and Mixed Reality (MR) will be integrated into this new learning environment – providing entirely new means of collaborating with one another.

Likely players:

  • Amazon – personal assistance via Alexa
  • Apple – personal assistance via Siri
  • Google – personal assistance via Google Assistant; language translation
  • Facebook — personal assistance via M
  • Microsoft – personal assistance via Cortana; language translation
  • IBM Watson – cognitive computing; language translation
  • Polycom – videoconferencing
  • Blackboard – videoconferencing, application sharing, chat, interactive whiteboard
  • T1V, Prysm, and/or Bluescape – submitting content to a digital canvas/workspace
  • Samsung, Sharp, LCD, and others – for large displays with integrated microphones, speakers, webcams, etc.
  • Feedly – RSS aggregator
  • _________ – for providing backchannels
  • _________ – for tools to create videocasts and interactive videos
  • _________ – for blogs, wikis, podcasts, journals
  • _________ – for quizzes/assessments
  • _________ – for discussion boards/forums
  • _________ – for creating AR, MR, and/or VR-based content

 

 

Oculus Education Pilot Kicks Off in 90 California Libraries — from oculus.com

Excerpt:

Books, like VR, open the door to new possibilities and let us experience worlds that would otherwise be beyond reach. Today, we’re excited to bring the two together through a new partnership with the California State Library. This pilot program will place 100 Rifts and Oculus Ready PCs in 90 libraries throughout the state, letting even more people step inside VR and see themselves as part of the revolution.

“It’s pretty cool to imagine how many people will try VR for the very first time—and have that ‘wow’ moment—in their local libraries,” says Oculus Education Program Manager Cindy Ball. “We hope early access will cause many people to feel excited and empowered to move beyond just experiencing VR and open their minds to the possibility of one day joining the industry.”

 

 

Also see:

Oculus Brings Rift to 90 Libraries in California for Public Access VR — from roadtovr.com by Dominic Brennan

Excerpt:

Oculus has announced a pilot program to place 100 Rifts and Oculus Ready PCs in 90 libraries throughout the state of California, from the Oregon border down to Mexico. Detailed on the Oculus Blog, the new partnership with the California State Library hopes to highlight the educational potential of VR, as well as provide easy access to VR hardware within the heart of local communities.

“Public libraries provide safe, supportive environments that are available and welcoming to everyone,” says Oculus Education Program Manager Cindy Ball. “They help level the playing field by providing educational opportunities and access to technology that may not be readily available in the community households. Libraries share the love—at scale.”

 

 

 

Australian start-up taps IBM Watson to launch language translation earpiece — from prnewswire.com
World’s first available independent translation earpiece, powered by AI to be in the hands of consumers by July

Excerpts:

SYDNEY, June 12, 2017 /PRNewswire/ — Lingmo International, an Australian technology start-up, has today launched Translate One2One, an earpiece powered by IBM Watson that can efficiently translate spoken conversations within seconds, being the first of its kind to hit global markets next month.

Unveiled at last week’s United Nations Artificial Intelligence (AI) for Good Summit in Geneva, Switzerland, the Translate One2One earpiece supports translations across English, Japanese, French, Italian, Spanish, Brazilian Portuguese, German and Chinese. Available to purchase today for delivery in July, the earpiece carries a price tag of $179 USD, and is the first independent translation device that doesn’t rely on Bluetooth or Wi-Fi connectivity.

 


 

 

From DSC:
How much longer before this sort of technology gets integrated into videoconferencing and transcription tools that are used in online-based courses — enabling global learning at a scale never seen before? (Or perhaps NLP-based tools are already being integrated into global MOOCs and the like…not sure.) It would surely allow us to learn from each other across a variety of societies throughout the globe.

 

 

 

The Classroom of Tomorrow: A Panel Discussion — sponsored by Kaltura

Description:
Technology is changing the way we approach education, rapidly. But what will tomorrow’s classroom actually look like? We’ve invited some leading experts for a spirited debate about what the future holds for educational institutions. From personalization to predictive analytics to portable digital identities, we’ll explore the biggest changes coming. We’ll see how new technologies might interact with changing demographics, business models, drop out rates, and more.

Panelists:

  • David Nirenberg – Dean of the Division of the Social Sciences, University of Chicago
  • Rick Kamal – Chief Technology Officer, Harvard Business School, HBX
  • Gordon Freedman – President, National Laboratory for Education Transformation
  • Michael Markowitz – Entrepreneur and Investor, Education
  • Dr Michal Tsur – Co-founder and President, Kaltura

 

Also see:

  • Roadmap to the Future — by Dr Michal Tsur – Co-founder and President, Kaltura
    What are some of the leading trends emerging from the educational technology space? Michal Tsur takes you on a quick tour of big trends you should be aware of. Then, get a glimpse of Kaltura’s own roadmap for lecture capture and more.

 

 

Regarding the above items, some thoughts from DSC:
Kaltura did a nice job of placing the focus on a discussion about the future of the classroom, as well as on some trends to be aware of, and not necessarily on their own company (this was especially the case in regard to the panel discussion). They did mention some things about their newest effort, Kaltura Lecture Capture, but this was kept to a very reasonable amount.

 

 

From DSC:
In reviewing the item below, I wondered:

How should students — as well as Career Services Groups/Departments within institutions of higher education — respond to the growing use of artificial intelligence (AI) in people’s job searches?

My take on it? Each student needs to have a solid online-based footprint — such as offering one’s own streams of content via a WordPress-based blog, one’s Twitter account, and one’s LinkedIn account. That is, each student has to be out there digitally, not just physically. (Though I suspect having face-to-face conversations and interactions will always be an incredibly powerful means of obtaining jobs as well. But if this trend picks up steam, one’s online-based footprint becomes all the more important to finding work.)

 




How AI is changing your job hunt — by Jennifer Alsever

Excerpt (emphasis DSC):

The solution appeared in the form of artificial intelligence software from a young company called Interviewed. It speeds the vetting process by providing online simulations of what applicants might do on their first day as an employee. The software does much more than grade multiple-choice questions. It can capture not only so-called book knowledge but also more intangible human qualities. It uses natural-language processing and machine learning to construct a psychological profile that predicts whether a person will fit a company’s culture. That includes assessing which words he or she favors—a penchant for using “please” and “thank you,” for example, shows empathy and a possible disposition for working with customers—and measuring how well the applicant can juggle conversations and still pay attention to detail. “We can look at 4,000 candidates and within a few days whittle it down to the top 2% to 3%,” claims Freedman, whose company now employs 45 people. “Forty-eight hours later, we’ve hired someone.” It’s not perfect, he says, but it’s faster and better than the human way.

It isn’t just startups using such software; corporate behemoths are implementing it too. Artificial intelligence has come to hiring.

Predictive algorithms and machine learning are fast emerging as tools to identify the best candidates.

 

 



Addendum on 6/7/17:

 

 

 



Addendum on 6/15/17:

  • Want a job? It may be time to have a chat with a bot — from sfchronicle.com by Nicholas Cheng
    Excerpt:
    “The future is AI-based recruitment,” Mya CEO Eyal Grayevsky said. Candidates who were being interviewed through a chat couldn’t tell that they were talking to a bot, he added — even though the company isn’t trying to pass its bot off as human.

    A 2015 study by the National Bureau of Economic Research surveyed 300,000 people and found that those who were hired by a machine, using algorithms to match them to a job, stayed in their jobs 15 percent longer than those who were hired by human recruiters.

    A report by the McKinsey Global Institute estimates that more than half of human resources jobs may be lost to automation, though it did not give a time period for that shift.

    “Recruiting jobs will definitely go away,” said John Sullivan, who teaches management at San Francisco State University.

 

 

Adobe Scan can turn any document into an editable PDF (and it’s free!) — from interestingengineering.com

Excerpts:

Adobe is launching Adobe Scan, a brand new mobile application that makes converting paper documents to editable PDF files fast and simple.

To develop the app Adobe invested heavily in the company’s machine learning and AI platform, Sensei.

 

 

 

From Apple itself:

 

  • HomePod reinvents music in the home
    San Jose, California — Apple today announced HomePod, a breakthrough wireless speaker for the home that delivers amazing audio quality and uses spatial awareness to sense its location in a room and automatically adjust the audio. Designed to work with an Apple Music subscription for access to over 40 million songs, HomePod provides deep knowledge of personal music preferences and tastes and helps users discover new music.

    As a home assistant, HomePod is a great way to send messages, get updates on news, sports and weather, or control smart home devices by simply asking Siri to turn on the lights, close the shades or activate a scene. When away from home, HomePod is the perfect home hub, providing remote access and home automations through the Home app on iPhone or iPad.

 

 

 

 



Also see:



 

The 8 biggest announcements from Apple WWDC 2017 — from theverge.com by Natt Garun

Excerpt:

Apple introduced a new ARKit to let developers build augmented reality apps for the iPhone. The kit can help find planes, track motion, and estimate scale and ambient lighting. Popular apps like Pokémon Go will also use ARKit for improved real-time renders.

Rather than requiring external hardware like Microsoft’s HoloLens, Apple seems to be betting on ARKit to provide impressive quality imaging through a device most people already own. We’ll know more on how the quality actually compares when we get to try it out ourselves.

 

 

Everything Apple Announced Today at WWDC — from wired.com by Arielle Pardes

Excerpt:

On Monday, over 5,000 developers packed the San Jose Convention Center to listen to Tim Cook and other Apple execs share the latest innovations out of Cupertino. Over the course of two and a half hours, the company unveiled its most powerful Mac yet, a long-awaited Siri speaker, and tons of new software upgrades across all of the Apple platforms, from your iPhone to your Apple Watch. Missed the keynote speech? Here’s a recap of the nine biggest announcements from WWDC 2017.

 

 

Apple is launching an iOS ‘ARKit’ for augmented reality apps — from theverge.com by Adi Robertson

Excerpt:

Apple has announced a tool it calls ARKit, which will provide advanced augmented reality capabilities on iOS. It’s supposed to allow for “fast and stable motion tracking” that makes objects look like they’re actually being placed in real space, instead of simply hovering over it.

 

 

Apple is finally bringing virtual reality to the Mac – from businessinsider.com by Matt Weinberger

Excerpt:

Apple is finally bringing virtual reality support to its Mac laptops and desktops, bringing the company up to speed with what many see as the next phase of computing.

At Monday’s Apple WWDC event in San Jose, the company announced that with this fall’s MacOS High Sierra update, the Mac will support external graphics hardware — meaning you can plug in a box and greatly increase your machine’s graphical capabilities.

In turn, that external hardware will give the Mac the boost it needs to support virtual reality headsets, which require superior performance to create an immersive experience.

 

 

2017 Internet Trends Report — from kpcb.com by Mary Meeker

 

 

Mary Meeker’s 2017 internet trends report: All the slides, plus analysis — from recode.net by Rani Molla
The most anticipated slide deck of the year is here.

Excerpt:

Here are some of our takeaways:

  • Global smartphone growth is slowing: Smartphone shipments grew 3 percent year over year last year, versus 10 percent the year before. This is in addition to continued slowing internet growth, which Meeker discussed last year.
  • Voice is beginning to replace typing in online queries. Twenty percent of mobile queries were made via voice in 2016, while accuracy is now about 95 percent.
  • In 10 years, Netflix went from 0 to more than 30 percent of home entertainment revenue in the U.S. This is happening while TV viewership continues to decline.
  • China remains a fascinating market, with huge growth in mobile services and payments and services like on-demand bike sharing. (More here: The highlights of Meeker’s China slides.)

 

 

Read Mary Meeker’s essential 2017 Internet Trends report — from techcrunch.com by Josh Constine

Excerpt:

This is the best way to get up to speed on everything going on in tech. Kleiner Perkins venture partner Mary Meeker’s annual Internet Trends report is essentially the state of the union for the technology industry. The widely anticipated slide deck compiles the most informative research on what’s getting funded, how Internet adoption is progressing, which interfaces are resonating, and what will be big next.

You can check out the 2017 report embedded below, and here’s last year’s report for reference.

 

 

The Slickest Things Google Debuted [on 5/17/17] at Its Big Event — from wired.com by Arielle Pardes

Excerpt (emphasis DSC):

At this year’s Google I/O, the company’s annual developer conference and showcase, CEO Sundar Pichai made one thing very clear: Google is moving toward an AI-first approach in its products, which means pretty soon, everything you do on Google will be powered by machine learning. During Wednesday’s keynote speech, we saw that approach seep into all of Google’s platforms, from Android to Gmail to Google Assistant, each of which are getting spruced up with new capabilities thanks to AI. Here’s our list of the coolest things Google announced today.

 

 

Google Lens Turns Your Camera Into a Search Box — from wired.com by David Pierce

Excerpt:

Google is remaking itself as an AI company, a virtual assistant company, a classroom-tools company, a VR company, and a gadget maker, but it’s still primarily a search company. And [on 5/17/17] at Google I/O, its annual gathering of developers, CEO Sundar Pichai announced a new product called Google Lens that amounts to an entirely new way of searching the internet: through your camera.

Lens is essentially image search in reverse: you take a picture, Google figures out what’s in it. This AI-powered computer vision has been around for some time, but Lens takes it much further. If you take a photo of a restaurant, Lens can do more than just say “it’s a restaurant,” which you know, or “it’s called Golden Corral,” which you also know. It can automatically find you the hours, or call up the menu, or see if there’s a table open tonight. If you take a picture of a flower, rather than getting unneeded confirmation of its flower-ness, you’ll learn that it’s an Elatior Begonia, and that it really needs indirect, bright light to survive. It’s a full-fledged search engine, starting with your camera instead of a text box.

 

 

Google’s AI Chief On Teaching Computers To Learn–And The Challenges Ahead — from fastcompany.com by Harry McCracken
When it comes to AI technologies such as machine learning, Google’s aspirations are too big for it to accomplish them all itself.

Excerpt:

“Last year, we talked about becoming an AI-first company and people weren’t entirely sure what we meant,” he told me. With this year’s announcements, it’s not only understandable but tangible.

“We see our job as evangelizing this new shift in computing,” Giannandrea says.


Matching people with jobs
Pichai concluded the I/O keynote by previewing Google for Jobs, an upcoming career search engine that uses machine learning to understand job listings–a new approach that is valuable, Giannandrea says, even though looking for a job has been a largely digital activity for years. “They don’t do a very good job of classifying the jobs,” Giannandrea says. “It’s not just that I’m looking for part-time work within five miles of my house–I’m looking for an accounting job that involves bookkeeping.”

 

 

Google Assistant Comes to Your iPhone to Take on Siri — from wired.com by David Pierce

 

 

Google rattles the tech world with a new AI chip for all — from wired.com by Cade Metz

 

 

I/O 2017 Recap — from Google.com

 

 

The most important announcements from Google I/O 2017! — from androidcentral.com by Alex Dobie

 

 

Google IO 2017: All the announcements in one place! — from androidauthority.com by Kris Carlon

 

 

 

 

A question/reflection from DSC:


Will #MOOCs provide the necessary data for #AI-based intelligent agents/algorithms? Reminds me of Socratic.org:


 

 


Somewhat related:

 

From DSC:
There are now more than 12,000 skills on Amazon’s new platform — Alexa. I continue to wonder…what will this new platform mean for, and deliver to, societies throughout the globe?


 

From this Alexa Skills Kit page:

What Is an Alexa Skill?
Alexa is Amazon’s voice service and the brain behind millions of devices including Amazon Echo. Alexa provides capabilities, or skills, that enable customers to create a more personalized experience. There are now more than 12,000 skills from companies like Starbucks, Uber, and Capital One as well as innovative designers and developers.

What Is the Alexa Skills Kit?
With the Alexa Skills Kit (ASK), designers, developers, and brands can build engaging skills and reach millions of customers. ASK is a collection of self-service APIs, tools, documentation, and code samples that makes it fast and easy for you to add skills to Alexa. With ASK, you can leverage Amazon’s knowledge and pioneering work in the field of voice design.

You can build and host most skills for free using Amazon Web Services (AWS).
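To give a sense of what “adding a skill to Alexa” involves at the code level, here is a minimal, hypothetical sketch of a custom skill’s backend, written as an AWS Lambda handler that returns the standard Alexa JSON response envelope. The intent name and wording are invented for illustration; a real skill would typically be built with the ASK SDK and registered through the Alexa developer console.

```python
# Toy sketch of an Alexa custom-skill backend (hand-built response JSON).
def lambda_handler(event, context):
    request_type = event["request"]["type"]

    if request_type == "LaunchRequest":
        speech = "Welcome to the learning companion. What would you like to study today?"
        end_session = False
    elif request_type == "IntentRequest":
        intent = event["request"]["intent"]["name"]
        if intent == "NextLessonIntent":   # hypothetical intent name
            speech = "Resuming where you left off: module three, data visualization."
            end_session = True
        else:
            speech = "Sorry, I don't know how to help with that yet."
            end_session = False
    else:  # SessionEndedRequest and anything unexpected
        speech = "Goodbye."
        end_session = True

    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": end_session,
        },
    }
```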

 

 

 


 

 

EON CREATOR AVR

The EON Creator AVR Enterprise and Education content builder empowers non-technical users to create compelling AR and VR applications in minutes, not weeks.

ENTERPRISE
With no programming required, EON Creator AVR Enterprise empowers workers to accelerate learning and improve performance, safety, and efficiency in the workplace.

EDUCATION
Teachers and students can create, experience, and share AVR learning applications with EON Creator AVR and quickly add them to their current classroom, seamlessly.

 

 

 

 


Veeery interesting. Alexa now adds visuals / a screen! With the addition of 100 skills a day, where might this new platform lead?

Amazon introduces Echo Show

The description reads:

  • Echo Show brings you everything you love about Alexa, and now she can show you things. Watch video flash briefings and YouTube, see music lyrics, security cameras, photos, weather forecasts, to-do and shopping lists, and more. All hands-free—just ask.
  • Introducing a new way to be together. Make hands-free video calls to friends and family who have an Echo Show or the Alexa App, and make voice calls to anyone who has an Echo or Echo Dot.
  • See lyrics on-screen with Amazon Music. Just ask to play a song, artist or genre, and stream over Wi-Fi. Also, stream music on Pandora, Spotify, TuneIn, iHeartRadio, and more.
  • Powerful, room-filling speakers with Dolby processing for crisp vocals and extended bass response
  • Ask Alexa to show you the front door or monitor the baby’s room with compatible cameras from Ring and Arlo. Turn on lights, control thermostats and more with WeMo, Philips Hue, ecobee, and other compatible smart home devices.
  • With eight microphones, beam-forming technology, and noise cancellation, Echo Show hears you from any direction—even while music is playing
  • Always getting smarter and adding new features, plus thousands of skills like Uber, Jeopardy!, Allrecipes, CNN, and more

 

 

 

 

 

 



From DSC:

Now we’re seeing a major competition between the heavy-hitters to own one’s living room, kitchen, and more. Voice-controlled artificial intelligence. But now, add the ability to show videos, text, graphics, and more. Play music. Control the lights and the thermostat. Communicate with others via hands-free video calls.

Hmmm….very interesting times indeed.

 

 

Developers and corporates released 4,000 new skills for the voice assistant in just the last quarter. (source)

 

…with the company adding about 100 skills per day. (source)

 

 

 

The Living [Class] Room -- by Daniel Christian -- July 2012 -- a second device used in conjunction with a Smart/Connected TV

 

 



 

Addendum on 5/10/17:

 



 

 

The 2017 Dean’s List: EdTech’s 50 Must-Read Higher Ed Blogs [Meghan Bogardus Cortez at edtechmagazine.com]

 

The 2017 Dean’s List: EdTech’s 50 Must-Read Higher Ed Blogs — from edtechmagazine.com by Meghan Bogardus Cortez
These administrative all-stars, IT gurus, teachers and community experts understand how the latest technology is changing the nature of education.

Excerpt:

With summer break almost here, we’ve got an idea for how you can use some of your spare time. Take a look at the Dean’s List, our compilation of the must-read blogs that seek to make sense of higher education in today’s digital world.

Follow these education trailblazers for not-to-be-missed analyses of the trends, challenges and opportunities that technology can provide.

If you’d like to check out the Must-Read IT blogs from previous years, view our lists from 2016, 2015, 2014 and 2013.

 

 



From DSC:
I would like to thank Tara Buck, Meghan Bogardus Cortez, D. Frank Smith, Meg Conlan, Jimmy Daly, and the rest of the staff at EdTech Magazine for their support of this Learning Ecosystems blog through the years — I really appreciate it.

Thanks all for your encouragement through the years!



 

 

 

 

From DSC and Adobe — for faculty members and teachers out there:

Do your students an enormous favor by assigning them a digital communications project. Such a project could include images, infographics, illustrations, animations, videos, websites, blogs (with RSS feeds), podcasts, videocasts, mobile apps and more. Such outlets offer powerful means of communicating and demonstrating knowledge of a particular topic.

As Adobe mentions, when you teach your students how to create these types of media projects, you prepare them to be flexible and effective digital communicators. I would also add that these new forms and tools can be highly engaging while, at the same time, fostering students’ creativity. Building new media literacy skills will pay off big time for your students. It will land them jobs. It will help them communicate to a global audience. Students can build upon these skills to powerfully communicate numerous kinds of messages in the future. They can be their own radio station. They can be their own TV station.

For more information, see this page out at Adobe.com.

 

 

From DSC:
This is where we may need more team-based approaches…because one person may not be able to create and grade/assess such assignments.

 

 