From DSC:
I know Quentin Schultze from our years working together at Calvin College, in Grand Rapids, Michigan (USA). I have come to greatly appreciate Quin as a person of faith, as an innovative/entrepreneurial professor, as a mentor to his former students, and as an excellent communicator. 

Quin has written a very concise, wisdom-packed book that I would like to recommend to anyone seeking to become a better communicator, leader, and servant. But I would especially like to recommend this book to the leadership at Google, Amazon, Apple, Microsoft, IBM, Facebook, Nvidia, the major companies developing robots, and other high-tech companies. Why do I list these organizations? Because given the exponential pace of technological change, these organizations — and their leaders — have an enormous responsibility to make sure that the technologies they are developing result in positive changes for societies throughout the globe. They need wisdom, especially as they work on emerging technologies such as artificial intelligence (AI), personal assistants and bots, algorithms, robotics, the Internet of Things, big data, blockchain, and more. These technologies already exert an increasingly powerful influence on societies around the world, and we haven’t seen anything yet! Just because we can develop and implement something doesn’t mean that we should. Again, we need wisdom here.

But as Quin states, it’s not just about knowledge, the mind and our thoughts. It’s about our hearts as well. That is, we need leaders who care about others, who can listen well to others, who can serve others well while avoiding gimmicks, embracing diversity, building trust, fostering compromise and developing/exhibiting many of the other qualities that Quin writes about in his book. Our societies desperately need leaders who care about others and who seek to serve others well.

I highly recommend you pick up a copy of Quin’s book. There are few people who can communicate as much in as few words as Quin can. In fact, I wish that more writing on the web and more articles/research coming out of academia would be as concisely and powerfully written as Quin’s book, Communicate Like a True Leader: 30 Days of Life-Changing Wisdom.

To lead is to accept responsibility and act responsibly.
Quentin Schultze

Oculus Announces $199 Standalone VR Headset — from vrscout.com by Jonathan Nafarrete

Excerpt:

‘Oculus Go’ doesn’t require a phone or PC.

Oculus’ biggest event of the year, Oculus Connect 4, kicked off Wednesday morning with an opening keynote reveal from Facebook’s Mark Zuckerberg.

The Facebook-owned Oculus unveiled on stage its first standalone VR headset. Dubbed Oculus Go, the headset is an all-in-one mobile computer, which means you don’t need to slide in your phone or plug it into a beefy gaming PC. There are also no cords. Best of all, Oculus Go is priced at $199 and will be available in 2018.

Also see:

Excerpt:

The Top 200 Tools for Learning 2017 (11th Annual Survey) has been compiled by Jane Hart of the Centre for Learning & Performance Technologies from the votes of 2,174 learning professionals worldwide, together with three sub-lists:

  • Top 100 Tools for Personal & Professional Learning (PPL)
  • Top 100 Tools for Workplace Learning (WPL)
  • Top 100 Tools for Education (EDU)

 

Excerpt from the Analysis page (emphasis DSC):

Here is a brief analysis of what’s on the list and what it tells us about the current state of personal learning, workplace learning and education.

Some observations on what the Top Tools list tells us about personal and professional learning
As in previous years, individuals continue to use a wide variety of:

  • networks, services and platforms for professional networking, communication and collaboration
  • web resources and courses for self-improvement and self-development
  • tools for personal productivity

All of which shows that many individuals have become highly independent, continuous modern professional learners – making their own decisions about what they need to learn and how to do it.

Google’s jobs AI service hits private beta, now works in 100 languages — from venturebeat.com by Blair Hanley Frank

Excerpt:

Google today announced the beta release of its Cloud Job Discovery service, which uses artificial intelligence to help customers connect job vacancies with the people who can fill them.

Formerly known as the Cloud Jobs API, the system is designed to take information about open positions and help job seekers take better advantage of it. For example, Cloud Job Discovery can take a plain language query and help translate that to the specific jargon employers use to describe their positions, something that can be hard for potential employees to navigate.

As part of this beta release, Google announced that Cloud Job Discovery is now designed to work with applicant-tracking systems and staffing agencies, in addition to job boards and career site providers like CareerBuilder.

It also now works in 100 languages. While the service is still primarily aimed at customers in the U.S., some of Google’s existing clients need support for multiple languages. In the future, the company plans to expand the Cloud Job Discovery service internationally, so investing in language support now makes sense going forward.

From DSC:
Now tie this type of job discovery feature into a next generation learning platform, helping people identify which skills they need to get jobs in their local area(s). Provide a list of courses/modules/RSS feeds to get them started. Allow folks to subscribe to constant streams of content and unsubscribe from them at any time as well.
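To make that idea concrete, here is a minimal sketch in Python of the kind of skills-to-streams matching described above. Everything in it (the skill names, the content streams, and the Learner class) is hypothetical; a real platform would pull the in-demand skills from a job-discovery service like the one covered above.

```python
# A hypothetical sketch (not Google's Cloud Job Discovery API) of matching
# in-demand skills to subscribable "streams of content" on a learning platform.
# All skill names, stream names, and classes here are invented for illustration.

SKILL_TO_STREAMS = {
    "machine learning": ["Intro to ML course", "ML weekly RSS feed"],
    "data analysis": ["SQL basics module", "Data visualization RSS feed"],
}


class Learner:
    def __init__(self, name):
        self.name = name
        self.subscriptions = set()   # streams the learner currently follows

    def subscribe(self, stream):
        self.subscriptions.add(stream)

    def unsubscribe(self, stream):
        self.subscriptions.discard(stream)   # no error if not subscribed


def recommend_streams(in_demand_skills):
    """Return content streams matching the skills employers are asking for locally."""
    recommendations = []
    for skill in in_demand_skills:
        recommendations.extend(SKILL_TO_STREAMS.get(skill, []))
    return recommendations


learner = Learner("Pat")
for stream in recommend_streams(["machine learning"]):   # skills surfaced for Pat's region
    learner.subscribe(stream)
learner.unsubscribe("ML weekly RSS feed")                 # learners can opt out at any time
print(learner.subscriptions)
```

The subscribe/unsubscribe methods are the important part of the sketch: learners opt in and out of streams at will rather than enrolling in fixed programs.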

We MUST move to lifelong, constant learning via means that are highly accessible, available 24×7, and extremely cost-effective. Blockchain-based technologies will feed web-based learner profiles, and each of us will determine who can write to our learner profile and who can review it as well.
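As one way to picture that, here is a minimal sketch, assuming nothing about any particular blockchain product: a learner profile kept as an append-only, hash-linked list of completed modules, with simple permission lists that the learner controls. All of the names and structures below are illustrative.

```python
# A minimal, illustrative sketch of a web-based learner profile: an append-only,
# hash-linked record of completed modules plus access lists the learner manages.
# This is my own toy model, not any specific blockchain or credentialing product.

import hashlib
import json
from datetime import datetime, timezone


class LearnerProfile:
    def __init__(self, owner):
        self.owner = owner
        self.entries = []            # hash-linked records of completed learning
        self.writers = {owner}       # who may add entries
        self.reviewers = {owner}     # who may read the profile

    def grant(self, party, can_write=False, can_review=True):
        if can_write:
            self.writers.add(party)
        if can_review:
            self.reviewers.add(party)

    def add_completion(self, writer, module):
        if writer not in self.writers:
            raise PermissionError(f"{writer} may not write to this profile")
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        record = {
            "module": module,
            "issued_by": writer,
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "prev_hash": prev_hash,
        }
        record["hash"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(record)

    def review(self, reviewer):
        if reviewer not in self.reviewers:
            raise PermissionError(f"{reviewer} may not review this profile")
        return list(self.entries)


profile = LearnerProfile("daniel")
profile.grant("some_university", can_write=True)
profile.add_completion("some_university", "Intro to Data Analysis")
print(profile.review("daniel"))
```

The hash chain is what a blockchain-style approach contributes here: each record references the previous one, so a completion history cannot be quietly rewritten, while the permission sets capture the learner deciding who may write to the profile and who may review it.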

The Living [Class] Room -- by Daniel Christian -- July 2012 -- a second device used in conjunction with a Smart/Connected TV

Addendum on 9/29/17:



  • Facebook partners with ZipRecruiter and more aggregators as it ramps up in jobs — from techcrunch.com by Ingrid Lunden
    Excerpt:
    Facebook has made no secret of its wish to do more in the online recruitment market — encroaching on territory today dominated by LinkedIn, the leader in tapping social networking graphs to boost job-hunting. Today, Facebook is taking the next step in that process.
    Facebook will now integrate with ZipRecruiter — an aggregator that allows those looking to fill jobs to post ads to many traditional job boards, as well as sites like LinkedIn, Google and Twitter — to boost the number of job ads available on its platform targeting its 2 billion monthly active users.
    The move follows Facebook launching its first job ads earlier this year, and later appearing to be interested in augmenting that with more career-focused features, such as a platform to connect people looking for mentors with those looking to offer mentorship.

DeepMind, Vodafone, Google & Facebook – Deep Learning & AI Highlights — from re-work.co by Nikita Johnson

Excerpt:

This week, the Deep Learning Summit and AI Assistant Summit saw over 450 DL and AI experts and enthusiasts come together to learn from each other and explore the most recent research and progress in the space. Over the past two days we’ve heard from the likes of Amazon, Facebook, Google, and Vodafone, as well as universities such as Cambridge, Warwick, UCL, and Imperial, and exciting new startups like Jukedeck and Echobox. Topics have been incredibly diverse, covering NLP, space exploration, ML for music composition, and many more.

We’ve collected some of our favourite takeaways from both tracks over the last two days, as well as hearing what our attendees thought.

What did we hear at the AI Assistant Summit?
I’m driving in France and Google Translate is automatically translating all the French road signs for me and directing me to my location, telling me my time of arrival – this is the future. There is no interface; there is no screen.
Adi Chhabra, Evolution of AI & Machine Learning in Customer Experience – Beyond Interfaces, Vodafone

We are at the beginning of the era of assistance. In the future every employee will have an assistant to help him with decision making.
Christophe Bourguignat, Deep Learning for Conversational Intelligence on Analytics Data, Zelros

Samsung to develop VR mental health diagnosis tools for hospitals — by Cho Mu-Hyun
Samsung Electronics will work with Gangnam Severance Hospital and content maker FNI to develop mental health diagnosis tools that use virtual reality.

Excerpt:

Cognitive behaviour therapies for suicide prevention and psychological assessment will be the focus, it said.

The companies will make chairs and diagnosis kits as physical products and will develop an application for use in psychological assessments using artificial intelligence (AI).

Augmented reality 101: Top AR use-cases — from wikitude.com by Camila Kohles

Excerpt:

Before we proceed, let’s make one thing clear: AR is not just about dog face filters and Pokémon GO. We kid you not. People are using this technology to bring ease to their lives and many forward-thinking companies are working with augmented reality to improve their workflow and businesses. Let’s see how.

The Top 9 Augmented Reality Companies in Healthcare — from medicalfuturist.com with thanks to Woontack Woo for the resource

Excerpt:

When Pokémon GO conquered the world, everyone could see the huge potential in augmented reality. Although the hype around virtual animal hunting has settled, AR continues to march triumphantly into more and more industries and fields, including healthcare. Here, I have listed the most significant companies bringing augmented reality to medicine and healing.


Brain Power
The Massachusetts-based technology company, established in 2013, has been focusing on applying pioneering neuroscience with the latest in wearable technology, in particular Google Glass. The start-up builds brain science-driven software to transform wearables into neuro-assistive devices for the educational challenges of autism. Their aim is to teach life skills to children and adults on the autism spectrum. They developed a unique software suite, the “Empowered Brain,” aiming to help children with their social skills, language, and positive behaviors. The software contains powerful data collection and analytic tools, allowing for customized feedback for the child.

How VR Can Ease the Transition for First-Time Wheelchair Users — from vrscout.com by Presley West
Designers at the innovation and design company Fjord have created a VR experience that teaches new wheelchair users how to maneuver through their environment in a safe, empowering way.

How Eye Tracking is Driving the Next Generation of AR and VR — from vrscout.com by Eric Kuerzel

WebVR: Taking The Path Of Least Resistance To Mainstream VR — from vrscout.com by Vanessa Radd

Excerpt:

Content and Education
In the midst of a dearth of content for VR, WebVR content creators are coming together to create and collaborate. Over a million creators are sharing their 3D models on Sketchfab’s 3D/VR art community platform. Virtuleap also organized the first global WebVR hackathon.

“For application domains such as education and heritage, developing VR scenes and experiences for the Web is highly important,” said Stone. “[This] promotes accessibility by many beneficiaries without the need (necessarily) for expensive and sophisticated computing or human interface hardware.”

This democratized approach opens up possibilities in education far beyond what we’re seeing today.

“I also think that WebVR, as a JavaScript API, enables a wide range of future students and young developers to ‘dip their toes into the water’ and start to build portfolios demonstrating their capabilities, ultimately to future employers,” said Stone. “I remember the promise of the days of VRML and products such as SGI’s Cosmo and Cortona3D (which is still available today, of course). But the ability to open up interactive—and quite impressive—demos of VR experiences that existed in higher quality form on specialised platforms became an amazing marketing tool in the late 1990s and 2000s.”

Codify Academy Taps IBM Cloud with Watson to Design Cognitive Chatbot — from finance.yahoo.com
Chatbot “Bobbot” has driven thousands of potential leads and a 10 percent increase in converting visitors to students

Excerpt:

ARMONK, N.Y., Aug. 4, 2017 /PRNewswire/ — IBM (NYSE: IBM) today announced that Codify Academy, a San Francisco-based developer education startup, tapped into IBM Cloud’s cognitive services to create an interactive cognitive chatbot, Bobbot, that is improving student experiences and increasing enrollment.

Using the IBM Watson Conversation Service, Bobbot fields questions from prospective and current students in natural language via the company’s website. Since implementing the chatbot, Codify Academy has engaged thousands of potential leads through live conversation between the bot and site visitors, leading to a 10 percent increase in converting these visitors into students.

Bobbot can answer more than 200 common questions about enrollment, course and program details, tuition, and prerequisites, in turn enabling Codify Academy staff to focus on deeper, more meaningful exchanges.
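For readers curious about what “fielding questions in natural language” involves at its simplest, below is a rough sketch of the generic FAQ-matching pattern. It is not the IBM Watson Conversation Service API, and the question bank is invented; a service like Watson layers intent classification, entity extraction, and dialog management on top of anything this simple.

```python
# A simplified sketch of the general pattern behind an FAQ chatbot like the one
# described above. This is NOT the IBM Watson Conversation API; the question
# bank and the word-overlap matching are toy illustrations.

FAQ = {
    "How much is tuition?": "Tuition details are listed on our pricing page.",
    "What are the prerequisites?": "No prior coding experience is required.",
    "When does enrollment open?": "Enrollment opens at the start of each quarter.",
}


def answer(user_question):
    """Return the canned answer whose question shares the most words with the input."""
    user_words = set(user_question.lower().split())
    best_question, best_overlap = None, 0
    for question in FAQ:
        overlap = len(user_words & set(question.lower().split()))
        if overlap > best_overlap:
            best_question, best_overlap = question, overlap
    if best_question is None:
        return "Let me connect you with a staff member."  # hand off harder questions
    return FAQ[best_question]


print(answer("what's the tuition?"))
```

The hand-off branch matters as much as the matching: the point of the article is that the bot absorbs the routine questions so staff can spend their time on the deeper, more meaningful exchanges.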

Also see:

Chatbots — The Beginners Guide — from chatbotsmagazine.com

Excerpt:

If you search for chatbots on Google, you’ll probably come across hundreds of pages starting from what is a chatbot to how to build one. This is because we’re in 2017, the year of the chatbots revolution.

I’ve been introduced to many people who are new to this space and who are very interested and motivated in entering it, whether they’re software developers, entrepreneurs, or just tech hobbyists. Entering this space for the first time has become overwhelming in just a few months, particularly after Facebook announced the release of the Messenger API at its F8 developer conference. Because of this, I’ve decided to simplify the basic steps of entering this fascinating world.

 
VR Is the Fastest-Growing Skill for Online Freelancers — from bloomberg.com by Isabel Gottlieb
Workers who specialize in artificial intelligence also saw big jumps in demand for their expertise.

Excerpt:

Overall, tech-related skills accounted for nearly two-thirds of Upwork’s list of the 20 fastest-growing skills.

 
Also see:


How to Prepare Preschoolers for an Automated Economy — from nytimes.com by Claire Miller and Jess Bidgood

Excerpt:

MEDFORD, Mass. — Amory Kahan, 7, wanted to know when it would be snack time. Harvey Borisy, 5, complained about a scrape on his elbow. And Declan Lewis, 8, was wondering why the two-wheeled wooden robot he was programming to do the Hokey Pokey wasn’t working. He sighed, “Forward, backward, and it stops.”

Declan tried it again, and this time the robot shook back and forth on the gray rug. “It did it!” he cried. Amanda Sullivan, a camp coordinator and a postdoctoral researcher in early childhood technology, smiled. “They’ve been debugging their Hokey Pokeys,” she said.

The children, at a summer camp last month run by the Developmental Technologies Research Group at Tufts University, were learning typical kid skills: building with blocks, taking turns, persevering through frustration. They were also, researchers say, learning the skills necessary to succeed in an automated economy.

Technological advances have rendered an increasing number of jobs obsolete in the last decade, and researchers say parts of most jobs will eventually be automated. What the labor market will look like when today’s young children are old enough to work is perhaps harder to predict than at any time in recent history. Jobs are likely to be very different, but we don’t know which will still exist, which will be done by machines and which new ones will be created.

How SLAM technology is redrawing augmented reality’s battle lines — from venturebeat.com by Mojtaba Tabatabaie

Excerpt (emphasis DSC):

In early June, Apple introduced its first attempt to enter AR/VR space with ARKit. What makes ARKit stand out for Apple is a technology called SLAM (Simultaneous Localization And Mapping). Every tech giant — especially Apple, Google, and Facebook — is investing heavily in SLAM technology and whichever takes best advantage of SLAM tech will likely end up on top.

SLAM is a computer vision technique that captures visual data from the physical world as a set of points, giving a machine an understanding of its surroundings. SLAM makes it possible for machines to “have an eye and understand” what’s around them through visual input.

Using these points, machines can build an understanding of their surroundings. This data also helps AR developers like myself create much more interactive and realistic experiences. That understanding can be applied in different scenarios such as robotics, self-driving cars, AI, and of course augmented reality.

The simplest form of understanding from this technology is recognizing walls, barriers, and floors. Right now, most AR SLAM technologies like ARKit only use floor recognition and position tracking to place AR objects around you, so they don’t actually know enough about your environment to react to it correctly. More advanced SLAM technologies like Google Tango can create a mesh of your environment, so the machine can not only tell you where the floor is but also identify walls and objects, allowing everything around you to become an element to interact with.
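To illustrate the floor-recognition step described above, here is a rough sketch (not ARKit or Tango code) that fits a dominant plane to a SLAM-style point cloud with a simple RANSAC loop. The synthetic data, thresholds, and function names are assumptions made for illustration.

```python
# A rough illustration of the floor-recognition step the article describes:
# given the 3D feature points a SLAM system produces, fit a dominant plane
# with a simple RANSAC loop and treat it as the floor. Not ARKit/Tango code.

import numpy as np


def fit_floor_plane(points, iterations=200, tolerance=0.02):
    """Return (normal, d) for the plane n.x + d = 0 with the most inliers."""
    rng = np.random.default_rng(0)
    best_normal, best_d, best_count = None, None, 0
    for _ in range(iterations):
        sample = points[rng.choice(len(points), size=3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:                      # degenerate (collinear) sample
            continue
        normal = normal / norm
        d = -normal.dot(sample[0])
        distances = np.abs(points @ normal + d)
        count = int((distances < tolerance).sum())
        if count > best_count:
            best_normal, best_d, best_count = normal, d, count
    return best_normal, best_d


# Synthetic point cloud: a flat floor at y = 0 plus some scattered clutter points.
floor = np.column_stack([np.random.rand(300) * 4, np.zeros(300), np.random.rand(300) * 4])
clutter = np.random.rand(60, 3) * 4
normal, d = fit_floor_plane(np.vstack([floor, clutter]))
print("estimated floor normal:", np.round(normal, 2))
```

Plane fitting of this kind is roughly what “floor recognition” amounts to; the mesh building that Tango does goes further by reconstructing walls and object surfaces from those same points.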

The company with the most complete SLAM database will likely be the winner. This database will, metaphorically, give these giants an eye on the world: Facebook, for example, could tag and locate your photo just by analyzing the image, and Google could place ads and virtual billboards around you by analyzing the camera feed from your smart glasses. Your self-driving car could navigate itself with nothing more than visual data.

The Business of Artificial Intelligence — from hbr.org by Erik Brynjolfsson & Andrew McAfee

Excerpts (emphasis DSC):

The most important general-purpose technology of our era is artificial intelligence, particularly machine learning (ML) — that is, the machine’s ability to keep improving its performance without humans having to explain exactly how to accomplish all the tasks it’s given. Within just the past few years machine learning has become far more effective and widely available. We can now build systems that learn how to perform tasks on their own.

Why is this such a big deal? Two reasons. First, we humans know more than we can tell: We can’t explain exactly how we’re able to do a lot of things — from recognizing a face to making a smart move in the ancient Asian strategy game of Go. Prior to ML, this inability to articulate our own knowledge meant that we couldn’t automate many tasks. Now we can.

Second, ML systems are often excellent learners. They can achieve superhuman performance in a wide range of activities, including detecting fraud and diagnosing disease. Excellent digital learners are being deployed across the economy, and their impact will be profound.

In the sphere of business, AI is poised to have a transformational impact, on the scale of earlier general-purpose technologies. Although it is already in use in thousands of companies around the world, most big opportunities have not yet been tapped. The effects of AI will be magnified in the coming decade, as manufacturing, retailing, transportation, finance, health care, law, advertising, insurance, entertainment, education, and virtually every other industry transform their core processes and business models to take advantage of machine learning. The bottleneck now is in management, implementation, and business imagination.

The machine learns from examples, rather than being explicitly programmed for a particular outcome.
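A toy example of that point, with invented data: no one writes the fraud rules below; the model infers them from labeled examples.

```python
# A toy illustration of "learning from examples rather than explicit programming":
# the decision tree infers its own rules from labeled cases. The data is invented
# (features: transaction amount in dollars, hour of day).

from sklearn.tree import DecisionTreeClassifier

X = [[12, 14], [30, 10], [25, 16], [900, 3], [750, 2], [980, 4]]  # [amount, hour]
y = [0, 0, 0, 1, 1, 1]                                            # 0 = normal, 1 = fraud

model = DecisionTreeClassifier().fit(X, y)
print(model.predict([[820, 3], [20, 13]]))  # large late-night charge gets flagged
```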

 

Let’s start by exploring what AI is already doing and how quickly it is improving. The biggest advances have been in two broad areas: perception and cognition. …For instance, Aptonomy and Sanbot, makers respectively of drones and robots, are using improved vision systems to automate much of the work of security guards. 

Machine learning is driving changes at three levels: tasks and occupations, business processes, and business models. 

You may have noticed that Facebook and other apps now recognize many of your friends’ faces in posted photos and prompt you to tag them with their names.

More Than Just Cool? — from insidehighered.com by Nick Roll
Virtual and augmented realities make headway in courses on health care, art history and social work.

Excerpt:

When Glenn Gunhouse visits the Pantheon, you would think that the professor, who teaches art and architecture history, wouldn’t be able to keep his eyes off the Roman temple’s columns, statues or dome. But there’s something else that always catches his eye: the jaws of the tourists visiting the building, and the way they all inevitably drop.

“Wow.”

There’s only one other way that Gunhouse has been able to replicate that feeling of awe for his students short of booking expensive plane tickets to Italy. Photos, videos and even three-dimensional walk-throughs on a computer screen don’t do it: It’s when his students put on virtual reality headsets loaded with images of the Pantheon.

 

…nursing schools are using virtual reality or augmented reality to bring three-dimensional anatomy illustrations off of two-dimensional textbook pages.

Also see:

Oculus reportedly planning $200 standalone wireless VR headset for 2018 — from techcrunch.com by Darrell Etherington

Excerpt:

Facebook is set to reveal a standalone Oculus virtual reality headset sometime later this year, Bloomberg reports, with a ship date of sometime in 2018. The headset will work without requiring a tethered PC or smartphone, according to the report, and will be branded with the Oculus name around the world, except in China, where it’ll carry Xiaomi trade dress and run some Xiaomi software as part of a partnership that extends to manufacturing plans for the device.

Facebook Inc. is taking another stab at turning its Oculus Rift virtual reality headset into a mass-market phenomenon. Later this year, the company plans to unveil a cheaper, wireless device that the company is betting will popularize VR the way Apple did the smartphone.

Source

Winner takes all — by Michael Moe, Luben Pampoulov, Li Jiang, Nick Franco, & Suzee Han

 

We did a lot of things that seemed crazy at the time. Many of those crazy things now have over a billion users, like Google Maps, YouTube, Chrome, and Android.

— Larry Page, CEO, Alphabet

Excerpt:

An alphabet is a collection of letters that represent language. Alphabet, accordingly, is a collection of companies that represent the many bets Larry Page is making to ensure his platform is built to not only survive, but to thrive in a future defined by accelerating digital disruption. It’s an “Alpha” bet on a diversified platform of assets.

If you look closely, the world’s top technology companies are making similar bets.

 
Technology in general, and the Internet in particular, is all about disproportionate gains to the leader in a category. Accordingly, as technology leaders like Facebook, Alphabet, and Amazon survey the competitive landscape, they have increasingly aimed to develop and acquire emerging technology capabilities across a broad range of complementary categories.

What a future, powerful, global learning platform will look & act like [Christian]


Learning from the Living [Class] Room:
A vision for a global, powerful, next generation learning platform

By Daniel Christian

NOTE: Having recently lost my Senior Instructional Designer position due to a staff reduction program, I am looking to help build such a platform as this. So if you are working on such a platform or know of someone who is, please let me know: danielchristian55@gmail.com.

I want to help people reinvent themselves quickly, efficiently, and cost-effectively — while providing more choice, more control to lifelong learners. This will become critically important as artificial intelligence, robotics, algorithms, and automation continue to impact the workplace.


 

The Living [Class] Room -- by Daniel Christian -- July 2012 -- a second device used in conjunction with a Smart/Connected TV

 

Learning from the Living [Class] Room:
A global, powerful, next generation learning platform

 

What does the vision entail?

  • A new, global, collaborative learning platform that offers more choice, more control to learners of all ages – 24×7 – and could become the organization that futurist Thomas Frey discusses here with Business Insider:

“I’ve been predicting that by 2030 the largest company on the internet is going to be an education-based company that we haven’t heard of yet,” Frey, the senior futurist at the DaVinci Institute think tank, tells Business Insider.

  • A learner-centered platform that is enabled by – and reliant upon – human beings but is backed up by a powerful suite of technologies that work together in order to help people reinvent themselves quickly, conveniently, and extremely cost-effectively
  • An AI-backed system that analyzes employment trends and opportunities and highlights those courses and “streams of content” that will help someone obtain the most in-demand skills
  • A system that tracks learning and, via Blockchain-based technologies, feeds all completed learning modules/courses into learners’ web-based learner profiles
  • A learning platform that provides customized, personalized recommendation lists – based upon the learner’s goals
  • A platform that delivers customized, personalized learning within a self-directed course (meant for those content creators who want to deliver more sophisticated courses/modules while moving people through the relevant Zones of Proximal Development)
  • Notifications and/or inspirational quotes will be available upon request to help provide motivation, encouragement, and accountability – helping learners establish habits of continual, lifelong learning
  • (Potentially) An online-based marketplace, matching learners with teachers, professors, and other such Subject Matter Experts (SMEs)
  • (Potentially) Direct access to popular job search sites
  • (Potentially) Direct access to resources that describe what other companies do/provide and descriptions of any particular company’s culture (as described by current and former employees and freelancers)

Further details:
While basic courses will be accessible via mobile devices, the optimal learning experience will leverage two or more displays/devices. So while smaller smartphones, laptops, and/or desktop workstations will be used to communicate synchronously or asynchronously with other learners, the larger displays will deliver an excellent learning environment for times when there is:

  • A Subject Matter Expert (SME) giving a talk or making a presentation on any given topic
  • A need to display multiple things going on at once, such as:
    • The SME(s)
    • An application or multiple applications that the SME(s) are using
    • Content/resources that learners are submitting in real-time (think Bluescape, T1V, Prysm, other)
    • The ability to annotate on top of the application(s) and point to things w/in the app(s)
    • Media being used to support the presentation such as pictures, graphics, graphs, videos, simulations, animations, audio, links to other resources, GPS coordinates for an app such as Google Earth, other
    • Other attendees (think Google Hangouts, Skype, Polycom, or other videoconferencing tools)
    • An (optional) representation of the Personal Assistant (such as today’s Alexa, Siri, M, Google Assistant, etc.) that’s being employed via the use of Artificial Intelligence (AI)

This new learning platform will also feature:

  • Voice-based commands to drive the system (via Natural Language Processing (NLP))
  • Language translation (using techs similar to what’s being used in Translate One2One, an earpiece powered by IBM Watson)
  • Speech-to-text capabilities for use w/ chatbots, messaging, inserting discussion board postings
  • Text-to-speech capabilities as an assistive technology and also for everyone to be able to be mobile while listening to what’s been typed
  • Chatbots
    • For learning how to use the system
    • For asking questions of – and addressing any issues with – the organization owning the system (credentials, payments, obtaining technical support, etc.)
    • For asking questions within a course
  • As many profiles as needed per household
  • (Optional) Machine-to-machine-based communications to automatically launch the correct profile when the system is initiated (from one’s smartphone, laptop, workstation, and/or tablet to a receiver for the system)
  • (Optional) Voice recognition to efficiently launch the desired profile
  • (Optional) Facial recognition to efficiently launch the desired profile
  • (Optional) Upon system launch, to immediately return to where the learner previously left off
  • The capability of the webcam to recognize objects and bring up relevant resources for that object
  • A built-in RSS feed aggregator – or a similar technology – to enable learners to tap into the relevant “streams of content” that are constantly flowing by them (a brief sketch follows this list)
  • Social media dashboards/portals – providing quick access to multiple sources of content and whereby learners can contribute their own “streams of content”
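
Here is a minimal sketch of that built-in RSS aggregator idea, using the feedparser library. The feed URLs are placeholders; a real platform would let each learner manage their own list of subscribed streams.

```python
# A minimal sketch of the built-in RSS aggregator mentioned in the list above,
# using the feedparser library. The feed URLs are placeholders only.

import feedparser

SUBSCRIBED_STREAMS = [
    "https://example.com/ai-course-updates.rss",       # placeholder feed URL
    "https://example.com/workplace-learning.rss",      # placeholder feed URL
]


def latest_items(feed_urls, per_feed=3):
    """Pull the newest entries from each subscribed stream of content."""
    items = []
    for url in feed_urls:
        feed = feedparser.parse(url)
        for entry in feed.entries[:per_feed]:
            items.append((feed.feed.get("title", url),
                          entry.get("title", ""),
                          entry.get("link", "")))
    return items


for source, title, link in latest_items(SUBSCRIBED_STREAMS):
    print(f"{source}: {title} ({link})")
```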

In the future, new forms of Human Computer Interaction (HCI) such as Augmented Reality (AR), Virtual Reality (VR), and Mixed Reality (MR) will be integrated into this new learning environment – providing entirely new means of collaborating with one another.

Likely players:

  • Amazon – personal assistance via Alexa
  • Apple – personal assistance via Siri
  • Google – personal assistance via Google Assistant; language translation
  • Facebook — personal assistance via M
  • Microsoft – personal assistance via Cortana; language translation
  • IBM Watson – cognitive computing; language translation
  • Polycom – videoconferencing
  • Blackboard – videoconferencing, application sharing, chat, interactive whiteboard
  • T1V, Prysm, and/or Bluescape – submitting content to a digital canvas/workspace
  • Samsung, Sharp, LCD, and others – for large displays with integrated microphones, speakers, webcams, etc.
  • Feedly – RSS aggregator
  • _________ – for providing backchannels
  • _________ – for tools to create videocasts and interactive videos
  • _________ – for blogs, wikis, podcasts, journals
  • _________ – for quizzes/assessments
  • _________ – for discussion boards/forums
  • _________ – for creating AR, MR, and/or VR-based content

An Artificial Intelligence Developed Its Own Non-Human Language — from theatlantic.com by Adrienne LaFrance
When Facebook designed chatbots to negotiate with one another, the bots made up their own way of communicating.

Excerpt:

In the report, researchers at the Facebook Artificial Intelligence Research lab describe using machine learning to train their “dialog agents” to negotiate. (And it turns out bots are actually quite good at dealmaking.) At one point, the researchers write, they had to tweak one of their models because otherwise the bot-to-bot conversation “led to divergence from human language as the agents developed their own language for negotiating.” They had to use what’s called a fixed supervised model instead.

In other words, the model that allowed two bots to have a conversation—and use machine learning to constantly iterate strategies for that conversation along the way—led to those bots communicating in their own non-human language. If this doesn’t fill you with a sense of wonder and awe about the future of machines and humanity then, I don’t know, go watch Blade Runner or something.


Oculus Education Pilot Kicks Off in 90 California Libraries — from oculus.com

Excerpt:

Books, like VR, open the door to new possibilities and let us experience worlds that would otherwise be beyond reach. Today, we’re excited to bring the two together through a new partnership with the California State Library. This pilot program will place 100 Rifts and Oculus Ready PCs in 90 libraries throughout the state, letting even more people step inside VR and see themselves as part of the revolution.

“It’s pretty cool to imagine how many people will try VR for the very first time—and have that ‘wow’ moment—in their local libraries,” says Oculus Education Program Manager Cindy Ball. “We hope early access will cause many people to feel excited and empowered to move beyond just experiencing VR and open their minds to the possibility of one day joining the industry.”

Also see:

Oculus Brings Rift to 90 Libraries in California for Public Access VR — from roadtovr.com by Dominic Brennan

Excerpt:

Oculus has announced a pilot program to place 100 Rifts and Oculus Ready PCs in 90 libraries throughout the state of California, from the Oregon border down to Mexico. Detailed on the Oculus Blog, the new partnership with the California State Library hopes to highlight the educational potential of VR, as well as provide easy access to VR hardware within the heart of local communities.

“Public libraries provide safe, supportive environments that are available and welcoming to everyone,” says Oculus Education Program Manager Cindy Ball. “They help level the playing field by providing educational opportunities and access to technology that may not be readily available in the community households. Libraries share the love—at scale.”
