Microsoft debuts Ideas in Word, a grammar and style suggestions tool powered by AI — from venturebeat.com by Kyle Wiggers; with thanks to Mr. Jack Du Mez for his posting on this over on LinkedIn

Excerpt:

The first day of Microsoft’s Build developer conference is typically chock-full of news, and this year was no exception. During a keynote headlined by CEO Satya Nadella, the Seattle company took the wraps off a slew of updates to Microsoft 365, its lineup of productivity-focused, cloud-hosted software and subscription services. Among the highlights were a new AI-powered grammar and style checker in Word Online, dubbed Ideas in Word, and dynamic email messages in Outlook Mobile.

Ideas in Word builds on Editor, an AI-powered proofreader for Office 365 that was announced in July 2016 and replaced the Spelling & Grammar pane in Office 2016 later that year. Ideas in Word similarly taps natural language processing and machine learning to deliver intelligent, contextually aware suggestions that could improve a document’s readability. For instance, it’ll recommend ways to make phrases more concise, clear, and inclusive, and when it comes across a particularly tricky snippet, it’ll put forward synonyms and alternative phrasings.

 

Also see:


Helvetica, the world’s most famous typeface, gets a makeover — from fastcompany.com by Mark Wilson
Helvetica is one of the most popular typefaces on the planet. Here’s why Monotype decided to remake it.

Excerpt:

Helvetica Now is the product of two dozen type designers, and when you see everything it can do, you’ll see why. First and foremost, Helvetica Now offers three separate “masters” (or three separate Helvetica variations) for various use cases. Its “Micro” version is for small screens. “Display” is for signage. And “Text” is for more standard sizes in written materials. Each of these options will cause the letters to be both drawn and spaced differently.

 

Also see:

Bauhaus architecture and design from A to Z — from dezeen.com by Tom Ravenscroft

Excerpt:

To conclude our Bauhaus 100 series, celebrating the centenary of the hugely influential design school, we round up everything you need to know about the Bauhaus, from A to Z.


Legal Battle Over Captioning Continues — from insidehighered.com by Lindsay McKenzie
A legal dispute over video captions continues after court rejects requests by MIT and Harvard University to dismiss lawsuits accusing them of discriminating against deaf people.

Excerpt:

Two high-profile civil rights lawsuits filed by the National Association of the Deaf against Harvard University and the Massachusetts Institute of Technology are set to continue after requests to dismiss the cases were recently denied for the second time.

The two universities were accused by the NAD in 2015 of failing to make their massive open online courses, guest lectures and other video content accessible to people who are deaf or hard of hearing.

Some of the videos, many of which were hosted on the universities’ YouTube channels, did have captions — but the NAD complained that these captions were sometimes so bad that the content was still inaccessible.

Spokespeople for both Harvard and MIT declined to comment on the ongoing litigation but stressed that their institutions were committed to improving web accessibility.


From DSC:
First of all, an article:

The four definitive use cases for AR and VR in retail — from forbes.com by Nikki Baird

AR in retail

Excerpt (emphasis DSC):

AR is the go-to engagement method of choice when it comes to product and category exploration. A label on a product on a shelf can only do so much to convey product and brand information, vs. AR, which can easily tap into a wealth of digital information online and bring it to life as an overlay on a product or on the label itself.

 

From DSC:
Applying this concept to the academic world…what might this mean for a student in a chemistry class who has a mobile device and/or a pair of smart goggles on and is working with an Erlenmeyer flask? A burette? A Bunsen burner?

Along these lines...what if all of those confused students — like I was, struggling through chem lab — could see how an experiment was *supposed* to be done?!

That is, if there’s only 30 minutes of lab time left, the professor or TA could “flip a switch” to turn on the AR cloud within the laboratory space to allow those struggling students to see how to do their experiment.

I can’t tell you how many times I was just trying to get through the lab — not knowing what I was doing, and getting zero help from any professor or TA. I hardly learned a thing that stuck with me…except the names of a few devices and the abbreviations of a few chemicals. For the most part, it was a waste of money. How many students experience this as well and feel like I did?

Will the terms “blended learning” and/or “hybrid learning” take on whole new dimensions with the onset of AR, MR, and VR-related learning experiences?

#IntelligentTutoring #IntelligentSystems #LearningExperiences
#AR #VR #MR #XR #ARCloud #AssistiveTechnologies
#Chemistry #BlendedLearning #HybridLearning #DigitalLearning

 

Also see:

 

“It is conceivable that we’re going to be moving into a world without screens, a world where [glasses are] your screen. You don’t need any more form factor than [that].”

(AT&T CEO)


From DSC:
Our family uses AT&T for our smartphones and for our Internet access. What I would really like from AT&T is the ability to speak to my router — either through an app on a smartphone or by having their routers morph into Alexa-type devices — and simply tell it what I want it to do:

“Turn off Internet access tonight from 9pm until 6am tomorrow morning.”
“Only allow Internet access for parents’ accounts.”
“Upgrade my bandwidth for the next 2 hours.”

Upon startup, the app would ask whether I wanted to set up any “admin” types of accounts…and, if so, would recognize that voice/those voices as having authority and control over the device.
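
A rough sketch of what the plumbing behind such an interface might look like (the intent names, fields, and keyword parsing below are all hypothetical, just to illustrate turning spoken requests into structured router settings):

```python
from dataclasses import dataclass
from datetime import time
from typing import List, Optional
import re

@dataclass
class RouterIntent:
    """Hypothetical intent object a voice assistant could hand to the router's API."""
    action: str                          # e.g. "pause_internet", "restrict_access", "boost_bandwidth"
    start: Optional[time] = None
    end: Optional[time] = None
    accounts: Optional[List[str]] = None

def parse_command(utterance: str) -> RouterIntent:
    """Very naive keyword matching, standing in for the assistant's natural language layer."""
    text = utterance.lower()
    if "turn off internet" in text:
        hours = re.findall(r"(\d{1,2})\s*(am|pm)", text)
        to_time = lambda h, ampm: time((int(h) % 12) + (12 if ampm == "pm" else 0))
        return RouterIntent("pause_internet", start=to_time(*hours[0]), end=to_time(*hours[1]))
    if "only allow" in text and "parents" in text:
        return RouterIntent("restrict_access", accounts=["parents"])
    if "upgrade my bandwidth" in text:
        return RouterIntent("boost_bandwidth")
    return RouterIntent("unknown")

# The first spoken command above becomes a structured request the router could act on.
print(parse_command("Turn off Internet access tonight from 9pm until 6am tomorrow morning."))
```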

Would you use this type of interface? I know I would!

P.S. I’d like to be able to speak to our thermostat in that sort of way as well.

 

7 Things You Should Know About Accessibility Policy — from library.educause.edu

Excerpt:

Websites from the Accessible Technology Initiative (ATI) of the California State University, Penn State, the University of Virginia, and the Web Accessibility Initiative feature rich content related to IT accessibility policies. A California State University memorandum outlines specific responsibilities and reporting guidelines in support of CSU’s Policy on Disability Support and Accommodations. Cornell University developed a multiyear “Disability Access Management Strategic Plan.” Specific examples of accessibility policies focused on electronic communication and information technology can be found at Penn State, Purdue University, Yale University, and the University of Wisconsin–Madison. Having entered into a voluntary agreement with the National Federation of the Blind to improve accessibility, Wichita State University offers substantial accessibility-related resources for its community, including specific standards for ensuring accessibility in face-to-face instruction.


The 10+ best real-world examples of augmented reality — from forbes.com by Bernard Marr

Excerpt:

Augmented reality (AR) can add value, solve problems and enhance the user experience in nearly every industry. Businesses are catching on and increasing investments to drive the growth of augmented reality, which makes it a crucial part of the tech economy.

 

As referenced by Bernard in his above article:


From DSC:
Along these lines, I really appreciate the “translate” feature within Twitter. It helps open up whole new avenues of learning for me from people across the globe. A very cool, practical, positive, beneficial feature/tool!!!


From DSC:
The article below relays some interesting thoughts on what an alternative syllabus could look like. It kind of reminds me of a digital playlist…

Looking For Syllabus 2.0 — from usv.com by Dani Grant

Excerpt:

There have been several attempts already to curate online resources for learning new topics. Usually they take the form of a list of links. The problem with the list of links approach is that they are static and they are inefficient. You don’t need to read a whole link to get the main point, you want to curate little bits and pieces of open resources: 30 seconds of this podcast, a minute and a half from this youtube video, just these 4 paragraphs from this article.

The thing that is closest to a modern internet syllabi is Susan Fowler’s guide for learning physics (it’s really amazing, go check it out). What if you could have that type of curated guide for many topics that gets updated by the community over time, with inline discussion with other learners?

I think Syllabus 2.0 could look something like this:

We’ve created a sample syllabus for this last topic so you can see what we envision in action. It curates 8 hours of podcasts, talks and blog posts into a 30 minute guide.
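
From DSC:
A curated guide like that is essentially a playlist of timestamped clips pulled from open resources. As a thought experiment (the structure and field names below are mine, not anything from the USV post), such a syllabus could be represented with something as simple as this:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Clip:
    """One small slice of an open resource: a few paragraphs, 30 seconds of a podcast, etc."""
    title: str
    url: str
    media_type: str            # "podcast" | "video" | "article"
    start_seconds: int = 0     # where the excerpt begins (0 for text)
    end_seconds: int = 0       # where the excerpt ends
    note: str = ""             # curator's framing or a discussion prompt

@dataclass
class Syllabus:
    topic: str
    clips: List[Clip] = field(default_factory=list)

    def total_minutes(self) -> float:
        return sum(c.end_seconds - c.start_seconds for c in self.clips) / 60

# A tiny guide that boils longer sources down to short, ordered excerpts.
guide = Syllabus("Intro to network effects")
guide.clips.append(Clip("Podcast episode", "https://example.com/pod", "podcast", 300, 390,
                        "Just this 90-second segment on two-sided markets."))
guide.clips.append(Clip("Conference talk", "https://example.com/talk", "video", 0, 120))
print(f"{len(guide.clips)} clips, about {guide.total_minutes():.0f} minutes of material")
```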


Presentation Translator for PowerPoint — from Microsoft (emphasis below from DSC:)

Presentation Translator breaks down the language barrier by allowing users to offer live, subtitled presentations straight from PowerPoint. As you speak, the add-in, powered by the Microsoft Translator live feature, allows you to display subtitles directly on your PowerPoint presentation in any one of more than 60 supported text languages. This feature can also be used for audiences who are deaf or hard of hearing.

 

Additionally, up to 100 audience members in the room can follow along with the presentation in their own language, including the speaker’s language, on their phone, tablet or computer.

 

From DSC:
Up to 100 audience members in the room can follow along with the presentation in their own language! Wow!

Are you thinking what I’m thinking?! If this could also reach learners and/or employees outside the room, it could be an incredibly powerful piece of a next generation, global learning platform!

Automatic translation with subtitles — per the learner’s or employee’s primary language setting as established in their cloud-based learner profile. Though this posting is not about blockchain, the idea of a cloud-based learner profile reminds me of the following graphic I created in January 2017.
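
A minimal sketch of that automatic-subtitling idea, assuming the Azure Cognitive Services Speech SDK (the translation service behind the add-in); the learner-profile lookup is hypothetical and the key/region values are placeholders:

```python
import azure.cognitiveservices.speech as speechsdk

def preferred_language(learner_id: str) -> str:
    """Hypothetical lookup against a cloud-based learner profile."""
    profiles = {"learner-123": "es", "learner-456": "zh-Hans"}
    return profiles.get(learner_id, "en")

target = preferred_language("learner-123")

# Configure live speech translation (the key and region are placeholders).
config = speechsdk.translation.SpeechTranslationConfig(
    subscription="YOUR_SPEECH_KEY", region="YOUR_REGION")
config.speech_recognition_language = "en-US"
config.add_target_language(target)

recognizer = speechsdk.translation.TranslationRecognizer(translation_config=config)

# Each recognized utterance arrives already translated; surface it as a subtitle.
recognizer.recognized.connect(
    lambda evt: print(f"[{target}] {evt.result.translations.get(target, '')}"))

recognizer.start_continuous_recognition()
input("Listening... press Enter to stop.\n")
recognizer.stop_continuous_recognition()
```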

A couple of relevant quotes here:

A number of players and factors are changing the field. Georgia Institute of Technology calls it “at-scale” learning; others call it the “mega-university” — whatever you call it, this is the advent of the very large, 100,000-plus-student-scale online provider. Coursera, edX, Udacity and FutureLearn (U.K.) are among the largest providers. But individual universities such as Southern New Hampshire, Arizona State and Georgia Tech are approaching the “at-scale” mark as well. One could say that’s evidence of success in online learning. And without question it is.

But, with highly reputable programs at this scale and tuition rates at half or below the going rate for regional and state universities, the impact is rippling through higher ed. Georgia Tech’s top 10-ranked computer science master’s with a total expense of less than $10,000 has drawn more than 10,000 qualified majors. That has an impact on the enrollment at scores of online computer science master’s programs offered elsewhere. The overall online enrollment is up, but it is disproportionately centered in affordable scaled programs, draining students from the more expensive, smaller programs at individual universities. The dominoes fall as more and more high-quality at-scale programs proliferate.

— Ray Schroeder


Education goes omnichannel. In today’s connected world, consumers expect to have anything they want available at their fingertips, and education is no different. Workers expect to be able to learn on-demand, getting the skills and knowledge they need in that moment, to be able to apply it as soon as possible. Moving fluidly between working and learning, without having to take time off to go to – or back to – school will become non-negotiable.

— Anant Agarwal

 

From DSC:
Is there major change/disruption ahead? Could be…for many, it can’t come soon enough.


Smart speakers hit critical mass in 2018 — from techcrunch.com by Sarah Perez

Excerpt (emphasis DSC):

We already know Alexa had a good Christmas — the app shot to the top of the App Store over the holidays, and the Alexa service even briefly crashed from all the new users. But Alexa, along with other smart speaker devices like Google Home, didn’t just have a good holiday — they had a great year, too. The smart speaker market reached critical mass in 2018, with around 41 percent of U.S. consumers now owning a voice-activated speaker, up from 21.5 percent in 2017.

 

In the U.S., there are now more than 100 million Alexa-enabled devices installed — a key milestone for Alexa to become a “critical mass platform,” the report noted.


On one hand, XR-related technologies
show some promise and possibilities…

 

The AR Cloud will infuse meaning into every object in the real world — from venturebeat.com by Amir Bozorgzadeh

Excerpt:

Indeed, if you haven’t yet heard of the “AR Cloud”, it’s time to take serious notice. The term was coined by Ori Inbar, an AR entrepreneur and investor who founded AWE. It is, in his words, “a persistent 3D digital copy of the real world to enable sharing of AR experiences across multiple users and devices.”

 

Augmented reality invades the conference room — from zdnet.com by Ross Rubin
Spatial extends the core functionality of video and screen sharing apps to a new frontier.


The 5 most innovative augmented reality products of 2018 — from next.reality.news by Adario Strange


Augmented, virtual reality major opens at Shenandoah U. next fall — from edscoop.com by Betsy Foresman

Excerpt:

“It’s not about how virtual reality functions. It’s about, ‘How does history function in virtual reality? How does biology function in virtual reality? How does psychology function with these new tools?’” he said.

The school hopes to prepare students for careers in a field with a market size projected to grow to $209.2 billion by 2022, according to Statista. Whelan compared VR technology, still at its advent, to the introduction of the personal computer.

 

VR is leading us into the next generation of sports media — from venturebeat.com by Mateusz Przepiorkowski


Accredited surgery instruction now available in VR — from zdnet.com by Greg Nichols
The medical establishment has embraced VR training as a cost-effective, immersive alternative to classroom time.

 

Toyota is using Microsoft’s HoloLens to build cars faster — from cnn.com by Rachel Metz

From DSC:
But even in that posting the message is mixed…some pros, some cons. Some things are going well for XR-related technologies…but others are not going well at all.


…but on the other hand,
some things don’t look so good…

 

Is the Current Generation of VR Already Dead? — from medium.com by Andreas Goeldi

Excerpt:

Four years later, things are starting to look decidedly bleak. Yes, there are about 5 million Gear VR units and 3 million Sony Playstation VR headsets in market, plus probably a few hundred thousand higher-end Oculus and HTC Vive systems. Yes, VR is still being demonstrated at countless conferences and events, and big corporations that want to seem innovative love to invest in a VR app or two. Yes, Facebook just cracked an important low-end price point with its $200 Oculus Go headset, theoretically making VR affordable for mainstream consumers. Plus, there’s even more hype about Augmented Reality, which in a way could be a gateway drug to VR.

But it’s hard to ignore a growing feeling that VR is not developing as the industry hoped it would. So is that it again, we’ve seen this movie before, let’s all wrap it up and wait for the next wave of VR to come along about five years from now?

There are a few signs that are really worrying…


From DSC:
My take is that it’s too early to tell. We need to give things more time.


The WT2 in-ear translator will be available in January, real-time feedback soon — from wearable-technologies.com by Cathy Russey

Excerpt:

Shenzhen, China & Pasadena, CA-based startup Timekettle wants to solve the language barrier problem. So, the company developed the WT2 translator – an in-ear translator for real-time, natural and hands-free communication. The company just announced they’ll be shipping the new translator in January 2019.


The information below is from Heather Campbell at Chegg
(emphasis DSC)


 

Chegg Math Solver is an AI-driven tool to help students understand math. It is more than just a calculator – it explains the approach to solving the problem, so students won’t just copy the answer but will understand it and be able to solve similar problems themselves. Most importantly, students can dig deeper into a problem and see why it’s solved that way. Chegg Math Solver.

In every subject, there are many key concepts and terms that are crucial for students to know and understand. Often it can be hard to determine what the most important concepts and terms are for a given subject, and even once you’ve identified them, you still need to understand what they mean. To help you learn and understand these terms and concepts, we’ve provided thousands of definitions, written and compiled by Chegg experts. Chegg Definition.


From DSC:
I see this type of functionality as a piece of a next generation learning platform — a piece of the Living from the Living [Class] Room type of vision. Great work here by Chegg!

Likely, students will also be able to take pictures of their homework, submit it online, and have that image/problem analyzed for correctness and/or where things went wrong with it.
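
Along those lines, the “explains the approach to solving the problem” part could look something like the sketch below. SymPy here is just my stand-in for illustration; Chegg hasn’t published how its solver actually works:

```python
from sympy import symbols, Eq, solve

x = symbols("x")
equation = Eq(3 * x + 5, 20)            # 3x + 5 = 20

# Step 1: isolate the x-term by subtracting 5 from both sides.
step1 = Eq(equation.lhs - 5, equation.rhs - 5)
print("Step 1:", step1)                 # Eq(3*x, 15)

# Step 2: divide both sides by the coefficient of x.
step2 = Eq(step1.lhs / 3, step1.rhs / 3)
print("Step 2:", step2)                 # Eq(x, 5)

# The solver confirms the final answer in one call.
print("Answer:", solve(equation, x))    # [5]
```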


Alexa, get me the articles (voice interfaces in academia) — from blog.libux.co by Kelly Dagan

Excerpt:

Credit to Jill O’Neill, who has written an engaging consideration of applications, discussions, and potentials for voice-user interfaces in the scholarly realm. She details a few use case scenarios: finding recent, authoritative biographies of Jane Austen; finding if your closest library has an item on the shelf now (and whether it’s worth the drive based on traffic).

Coming from an undergraduate-focused (and library) perspective, I can think of a few more:

  • asking if there are any group study rooms available at 7 pm and making a booking
  • finding out if [X] is open now (Archives, the Cafe, the Library, etc.)
  • finding three books on the Red Brigades, seeing if they are available, and saving the locations
  • grabbing five research articles on stereotype threat, to read later
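
From DSC:
As a rough illustration of how one of those scenarios might be wired up, here is a sketch of a plain Lambda-style handler. The intent name, slot, and room data are all hypothetical, and a production Alexa skill would typically use the ASK SDK:

```python
# Faked availability data, standing in for a real room-booking system.
ROOM_OPENINGS = {"19:00": ["Group Study Room 2", "Group Study Room 5"]}

def lambda_handler(event, context):
    request = event.get("request", {})
    if request.get("type") == "IntentRequest" and \
            request["intent"]["name"] == "CheckStudyRoomIntent":
        when = request["intent"]["slots"]["Time"]["value"]     # e.g. "19:00"
        rooms = ROOM_OPENINGS.get(when, [])
        if rooms:
            speech = f"{len(rooms)} group study rooms are free at {when}, including {rooms[0]}."
        else:
            speech = f"Sorry, no group study rooms are free at {when}."
    else:
        speech = "You can ask me about study rooms, opening hours, or whether a book is on the shelf."

    # Minimal Alexa-style response envelope.
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": True,
        },
    }
```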

 

Also see:


Virtual digital assistants in the workplace: Still nascent, but developing — from cisco.com by Pat Brans
As workers get overwhelmed with daily tasks, they want virtual digital assistants in the workplace that can alleviate some of the burden.

Excerpts:

As life gets busier, knowledge workers are struggling with information overload.

They’re looking for a way out, and that way, experts say, will eventually involve virtual digital assistants (VDAs). Increasingly, workers need to complete myriad tasks, often seemingly simultaneously. And as the pace of business continues to drive ever faster, hands-free, intelligent technology that can speed administrative tasks holds obvious appeal.

So far, scenarios in which digital assistants in the workplace enhance productivity fall into three categories: scheduling, project management, and improved interfaces to enterprise applications. “Using digital assistants to perform scheduling has clear benefits,” Beccue said.

“Scheduling meetings and managing calendars takes a long time—many early adopters are able to quantify the savings they get when the scheduling is performed by a VDA. Likewise, when VDAs are used to track project status through daily standup meetings, project managers can easily measure the time saved.”

 

Perhaps the most important change we’ll see in future generations of VDA technology for workforce productivity will be the advent of general-purpose VDAs that help users with all tasks. These VDAs will be multi-channel (providing interfaces through mobile apps, messaging, telephone, and so on) and they will be bi-modal (enlisting text and voice).
