Watch Salvador Dalí Return to Life Through AI — from interestingengineering.com
The Dalí Museum has created a deepfake of surrealist artist Salvador Dalí that brings him back to life.

Excerpt:

The Dalí Museum has created a deepfake of surrealist artist Salvador Dalí that brings him back to life. This life-size deepfake is set up to have interactive discussions with visitors.

The deepfake can produce 45 minutes of content and 190,512 possible combinations of phrases and decisions taken by the fake but realistic Dalí. The exhibition was created by Goodby, Silverstein & Partners using 6,000 frames of Dalí taken from historic footage and 1,000 hours of machine learning.

 

From DSC:
On one hand: incredible work! Fantastic job! On the other hand: if this type of deepfake can be created, how can any video be trusted from here on out? What technology/app will be able to confirm that a video actually shows that person, actually saying those words?

Will we get to a point where a video states, “This is so-and-so, and I approved this video”? Or will we have an electronic signature? Will a blockchain-based tech be used? I don’t know…there always seem to be pros and cons to any given technology. It’s how we use it. It can be a dream, or it can be a nightmare.
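As one hedged sketch of what such an “electronic signature” for video could look like: hash the footage and tag that hash with a key, so any later edit breaks verification. This minimal illustration uses a shared-secret HMAC for simplicity; a real system would use a public-key signature scheme (e.g., Ed25519) so anyone could verify without holding the creator’s secret. The key material and byte strings here are hypothetical stand-ins.

```python
import hashlib
import hmac

# A creator "signs" a video by hashing its bytes and tagging the hash
# with a secret key. Any modification to the footage changes the hash,
# so the old signature no longer verifies.

def sign_video(video_bytes: bytes, secret_key: bytes) -> str:
    digest = hashlib.sha256(video_bytes).digest()
    return hmac.new(secret_key, digest, hashlib.sha256).hexdigest()

def verify_video(video_bytes: bytes, secret_key: bytes, signature: str) -> bool:
    expected = sign_video(video_bytes, secret_key)
    # Constant-time comparison avoids leaking info via timing.
    return hmac.compare_digest(expected, signature)

key = b"creator-private-key"               # hypothetical key material
original = b"...original video frames..."
tampered = b"...deepfaked video frames..."

sig = sign_video(original, key)
print(verify_video(original, key, sig))    # True: footage untouched
print(verify_video(tampered, key, sig))    # False: footage altered
```

Of course, this only proves a video is unmodified since it was signed by the keyholder; it cannot prove the signed content itself is truthful.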

 

 

After nearly a decade of Augmented World Expo (AWE), founder Ori Inbar unpacks the past, present, & future of augmented reality — from next.reality.news by Adario Strange

Excerpts:

I think right now it’s almost a waste of time to talk about a hybrid device because it’s not relevant. It’s two different devices and two different use cases. But like you said, sometime in the future, 15, 20, 50 years, I imagine a point where you could open your eyes to do AR, and close your eyes to do VR.

I think there’s always room for innovation, especially with spatial computing where we’re in the very early stages. We have to develop a new visual approach that I don’t think we have yet. What does it mean to interact in a world where everything is visual and around you, and not on a two-dimensional screen? So there’s a lot to do there.

 

A big part of mainstream adoption is education. Until you get into AR and VR, you don’t really know what you’re missing. You can’t really learn about it from videos. And that education takes time. So the education, plus the understanding of the need, will create a demand.

— Ori Inbar

 

 

The Common Sense Census: Inside the 21st-Century Classroom

21st century classroom - excerpt from infographic

Excerpt:

Technology has become an integral part of classroom learning, and students of all ages have access to digital media and devices at school. The Common Sense Census: Inside the 21st-Century Classroom explores how K–12 educators have adapted to these critical shifts in schools and society. From the benefits of teaching lifelong digital citizenship skills to the challenges of preparing students to critically evaluate online information, educators across the country share their perspectives on what it’s like to teach in today’s fast-changing digital world.

 

 

From Google: New AR features in Search rolling out later this month.

 

 

Along these lines, see:

 

 

DARPA is reportedly eyeing a high-tech contact lens straight out of ‘Mission: Impossible’ — from taskandpurpose.com by Jared Keller

 

Just because we can...does not mean we should.

Excerpt:

The Defense Advanced Research Projects Agency (DARPA) is reportedly interested in a new wirelessly-connected contact lens recently unveiled in France, the latest in the agency’s ongoing search for small-scale technology to augment U.S. service members’ visual capabilities in the field.

 

From DSC:
We may not be there yet (and in my mind, that’s a good thing). But when this tech gets further developed and gets its foot in the door — military style — it may then expand its reach and scope. Then it gets integrated into other areas of society. If many people were very uncomfortable having someone walk in a public place wearing/using Google Glass, how will they/we feel about this one? Speaking for myself, I don’t like it.

 
 

The finalized 2019 Horizon Report Higher Education Edition (from library.educause.edu) was just released on 4/23/19.

Excerpt:

Key Trends Accelerating Technology Adoption in Higher Education:

Short-Term: Driving technology adoption in higher education for the next one to two years

  • Redesigning Learning Spaces
  • Blended Learning Designs

Mid-Term: Driving technology adoption in higher education for the next three to five years

  • Advancing Cultures of Innovation
  • Growing Focus on Measuring Learning

Long-Term: Driving technology adoption in higher education for five or more years

  • Rethinking How Institutions Work
  • Modularized and Disaggregated Degrees

 

 

Minerva’s Innovative Platform Makes Quality Higher Ed Personal and Affordable — from linkedin.com by Tom Vander Ark

Excerpt:

The first external partner, the Hong Kong University of Science and Technology (HKUST), loved the course design and platform but told Nelson they couldn’t afford to teach 15 students at a time. The Minerva team realized that to be applicable at major universities, active learning needed to be scalable.

Starting this summer, a new version of Forum will be available for classes of up to 400 at a time. For students, it will still feel like a small seminar. They’ll see the professor, themselves, and a dozen other students. Forum will manage the movement of students from screen to screen. “Everybody thinks they are in the main room,” said Nelson.

Forum enables real-time polling and helps professors create and manage breakout groups.
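As a thought experiment (this is not Minerva’s actual implementation; the group size and function names are assumptions), the screen-to-screen movement described above could rest on something as simple as partitioning a 400-person roster into seminar-sized views, so each student sees the professor plus roughly a dozen peers:

```python
import math

def seminar_views(students, view_size=13):
    """Partition a large roster into seminar-sized groups so each
    student's screen shows the professor plus about a dozen peers."""
    n_groups = math.ceil(len(students) / view_size)
    # Round-robin assignment keeps group sizes within one of each other.
    groups = [[] for _ in range(n_groups)]
    for i, student in enumerate(students):
        groups[i % n_groups].append(student)
    return groups

roster = [f"student_{i}" for i in range(400)]
groups = seminar_views(roster)
print(len(groups))                  # 31 seminar-sized "rooms"
print(max(len(g) for g in groups))  # at most 13 students visible
```

The real platform would also need to reshuffle these views for breakout groups and polling, but the core idea — a 400-person class rendered as many small seminars — is just a partitioning problem.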

Big Implications
With Forum, “For the first time you can deliver better than Ivy League education at absurdly low cost,” said Nelson.

Online courses and MOOCs repackaged the same format and simply offered it with less interaction. As new Forum partners will demonstrate, “It’s possible to deliver a year of undergraduate education that is vastly superior for under $5,000 per student,” added Nelson.

He’s excited to offer a turnkey university solution that, for partners like Oxford Teachers Academy, will allow new degree pathways for paraprofessionals who can work, learn, and earn a degree and certification.

 

Perhaps another piece of the puzzle is falling into place…

 

Another piece of the puzzle is coming into place...for the Learning from the Living Class Room vision

 

 

We Built an ‘Unbelievable’ (but Legal) Facial Recognition Machine — from nytimes.com by Sahil Chinoy

“The future of human flourishing depends upon facial recognition technology being banned,” wrote Woodrow Hartzog, a professor of law and computer science at Northeastern, and Evan Selinger, a professor of philosophy at the Rochester Institute of Technology, last year. “Otherwise, people won’t know what it’s like to be in public without being automatically identified, profiled, and potentially exploited.” Facial recognition is categorically different from other forms of surveillance, Mr. Hartzog said, and uniquely dangerous. Faces are hard to hide and can be observed from far away, unlike a fingerprint. Name and face databases of law-abiding citizens, like driver’s license records, already exist. And for the most part, facial recognition surveillance can be set up using cameras already on the streets. — Sahil Chinoy; per a weekly e-newsletter from Sam DeBrule at Machine Learnings in Berkeley, CA

Excerpt:

Most people pass through some type of public space in their daily routine — sidewalks, roads, train stations. Thousands walk through Bryant Park every day. But we generally think that a detailed log of our location, and a list of the people we’re with, is private. Facial recognition, applied to the web of cameras that already exists in most cities, is a threat to that privacy.

To demonstrate how easy it is to track people without their knowledge, we collected public images of people who worked near Bryant Park (available on their employers’ websites, for the most part) and ran one day of footage through Amazon’s commercial facial recognition service. Our system detected 2,750 faces from a nine-hour period (not necessarily unique people, since a person could be captured in multiple frames). It returned several possible identifications, including one frame matched to a head shot of Richard Madonna, a professor at the SUNY College of Optometry, with an 89 percent similarity score. The total cost: about $60.

 

 

 

 

From DSC:
What do you think about this emerging technology and its potential impact on our society — and on other societies like China? Again I ask…what kind of future do we want?

As for me, my face is against the use of facial recognition technology in the United States — as I don’t trust where this could lead.

This wild, wild, west situation continues to develop. For example, note how AI and facial recognition get their foot in the door via techs installed years ago:

The cameras in Bryant Park were installed more than a decade ago so that people could see whether the lawn was open for sunbathing, for example, or check how busy the ice skating rink was in the winter. They are not intended to be a security device, according to the corporation that runs the park.

So Amazon’s use of facial recognition is but another foot in the door. 

This needs to be stopped. Now.

 

Facial recognition technology is a menace disguised as a gift. It’s an irresistible tool for oppression that’s perfectly suited for governments to display unprecedented authoritarian control and an all-out privacy-eviscerating machine.

We should keep this Trojan horse outside of the city. (source)

 

 

Legal Battle Over Captioning Continues — from insidehighered.com by Lindsay McKenzie
A legal dispute over video captions continues after court rejects requests by MIT and Harvard University to dismiss lawsuits accusing them of discriminating against deaf people.

Excerpt:

Two high-profile civil rights lawsuits filed by the National Association of the Deaf against Harvard University and the Massachusetts Institute of Technology are set to continue after requests to dismiss the cases were recently denied for the second time.

The two universities were accused by the NAD in 2015 of failing to make their massive open online courses, guest lectures and other video content accessible to people who are deaf or hard of hearing.

Some of the videos, many of which were hosted on the universities’ YouTube channels, did have captions — but the NAD complained that these captions were sometimes so bad that the content was still inaccessible.

Spokespeople for both Harvard and MIT declined to comment on the ongoing litigation but stressed that their institutions were committed to improving web accessibility.

 

 

From DSC:
First of all, an article:

The four definitive use cases for AR and VR in retail — from forbes.com by Nikki Baird

AR in retail

Excerpt (emphasis DSC):

AR is the go-to engagement method of choice when it comes to product and category exploration. A label on a product on a shelf can only do so much to convey product and brand information, vs. AR, which can easily tap into a wealth of digital information online and bring it to life as an overlay on a product or on the label itself.

 

From DSC:
Applying this concept to the academic world…what might this mean for a student in a chemistry class who has a mobile device and/or a pair of smart goggles on and is working with an Erlenmeyer flask? A burette? A Bunsen burner?

Along these lines…what if all of those confused students — like *I* was, struggling through chem lab — could see how an experiment was *supposed to be done*!?

That is, if there’s only 30 minutes of lab time left, the professor or TA could “flip a switch” to turn on the AR cloud within the laboratory space to allow those struggling students to see how to do their experiment.

I can’t tell you how many times I was just trying to get through the lab — not knowing what I was doing, and getting zero help from any professor or TA. I hardly learned a thing that stuck with me…except the names of a few devices and the abbreviations of a few chemicals. For the most part, it was a waste of money. How many students experience this as well and feel like I did?

Will the terms “blended learning” and/or “hybrid learning” take on whole new dimensions with the onset of AR, MR, and VR-related learning experiences?

#IntelligentTutoring #IntelligentSystems #LearningExperiences
#AR #VR #MR #XR #ARCloud #AssistiveTechnologies
#Chemistry #BlendedLearning #HybridLearning #DigitalLearning

 

Also see:

 

“It is conceivable that we’re going to be moving into a world without screens, a world where [glasses are] your screen. You don’t need any more form factor than [that].”

(AT&T CEO)

 

 

Skills gap? Augmented reality can beam in expertise across the enterprise — by Greg Nichols
Hives of subject matter experts could man augmented reality switchboards, transferring knowledge to the field.

Excerpt:

Some 10 million manufacturing jobs will likely be needed in the coming decade, yet many of those will likely go unfilled, according to Deloitte and the Manufacturing Institute. Somewhat ironically, one of the biggest factors holding back a strong American manufacturing segment in 2019 may not be cheap foreign labor but unqualified U.S. labor.

Augmented reality, which is still trying to find its stride in the enterprise, could help by serving as a conduit for on-the-job knowledge transfer.

“We are excited to offer industrial enterprises a new way to use AR to leverage the tribal knowledge of subject matter experts (SMEs) and help alleviate the skills gap crisis threatening today’s industrial enterprise,” says Mike Campbell, EVP, augmented reality products, PTC.

 

From DSC:
First a posting that got me to wondering about something that I’ve previously wondered about from time to time…

College of Business unveils classroom of the future — from biz.source.colostate.edu by Joe Giordano

Excerpt:

Equipped with a wall of 27 high-definition video screens as well as five high-end cameras, the newest classroom in Colorado State University’s College of Business is designed to connect on-campus and online students in a whole new way.

The College of Business unveiled on March 29 the “Room of the Future,” featuring Mosaic, an innovative technology – powered by mashme.io – that creates a blended classroom experience, connecting on-campus and online students in real time.

 

From DSC:
If the pedagogies could be worked out, this could be a very attractive model for many people in the future as it:

  • Provides convenience.
  • Offers more choice. More control. (Students could pick whether they want to attend the class virtually or in a physical classroom).

If the resulting increase in students could bring down the price of offering the course, will we see this model flourish in the near future? 

For struggling colleges and universities, could this help increase the ROI of offering their classes on their physical campuses?

The technologies behind this are not cheap though…and that could be a show-stopper for this type of experiment. But…thinking out loud again…what if there were a cheaper way to view a group of other people in your learning community? Perhaps there will be a solution using some form of Extended Reality (XR)…hmmm….

 

 

 

 

 

 

 

 

Also see:

Learning from the Living Class Room

 

 

Cambridge library installation gives readers control of their sensory space — from cambridge.wickedlocal.com by Hannah Schoenbaum

Excerpts:

A luminous igloo-shaped structure in the front room of the Cambridge Public Library beckoned curious library visitors during the snowy first weekend of March, inviting them to explore a space engineered for everyone, yet uniquely their own.

Called “Alterspace” and developed by Harvard’s metaLAB and Library Innovation Lab, this experiment in adaptive architecture granted the individual control over the sensory elements in his or her space. A user enters the LED-illuminated dome to find headphones, chairs and an iPad on a library cart, which displays six modes: Relax, Read, Meditate, Focus, Create and W3!Rd.

From the cool blues and greens of Relax mode to a rainbow overload of excitement in the W3!Rd mode, Alterspace is engineered to transform its lights, sounds and colors into the ideal environment for a particular action.

 

 

From DSC:
This brings me back to the question/reflection…in the future, will students using VR headsets be able to study by a brook? An ocean? In a very quiet library (i.e., the headset would come with solid noise cancellation capabilities built into it)?  This type of room/capability would really be helpful for our daughter…who is easily distracted and doesn’t like noise.

 

 
© 2024 | Daniel Christian