Presentation Translator for PowerPoint — from Microsoft (emphasis below from DSC)

Presentation Translator breaks down the language barrier by allowing users to offer live, subtitled presentations straight from PowerPoint. As you speak, the add-in, powered by the Microsoft Translator live feature, allows you to display subtitles directly on your PowerPoint presentation in any one of more than 60 supported text languages. This feature can also be used for audiences who are deaf or hard of hearing.

 

Additionally, up to 100 audience members in the room can follow along with the presentation in their own language, including the speaker’s language, on their phone, tablet or computer.
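
From DSC: the pattern described above (one utterance, translated per language, pushed to each listener's own device) is simple to sketch. Below is a minimal, hedged illustration in Python; the `translate()` stub and the device/language data are hypothetical stand-ins for a real service such as the Microsoft Translator API, not the add-in's actual code.

```python
# Sketch of the subtitle fan-out idea: translate each utterance once per
# distinct audience language, then deliver the right translation to each
# device. The phrasebook below is a toy stand-in for a real translation
# service covering 60+ languages.

PHRASEBOOK = {
    ("Welcome to class.", "es"): "Bienvenidos a clase.",
    ("Welcome to class.", "fr"): "Bienvenue en classe.",
}

def translate(text: str, target_lang: str) -> str:
    """Stub translator: look up a canned phrase, else echo the original."""
    return PHRASEBOOK.get((text, target_lang), text)

def broadcast_subtitles(utterance: str, audience: dict) -> dict:
    """Translate once per distinct language, then fan out to each device."""
    by_lang = {lang: translate(utterance, lang) for lang in set(audience.values())}
    # Map each audience member's device to the subtitle in their own language.
    return {device: by_lang[lang] for device, lang in audience.items()}

# Each audience member registers a device and a preferred language.
audience = {"device-1": "es", "device-2": "fr", "device-3": "es"}
subs = broadcast_subtitles("Welcome to class.", audience)
```

Note that the translation work scales with the number of distinct languages in the room, not the number of devices, which is what makes serving 100 audience members feasible.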

 

From DSC:
Up to 100 audience members in the room can follow along with the presentation in their own language! Wow!

Are you thinking what I’m thinking?! If this could also address learners and/or employees outside the room as well, this could be an incredibly powerful piece of a next generation, global learning platform! 

Automatic translation with subtitles — per the learner’s or employee’s primary language setting as established in their cloud-based learner profile. Though this posting is not about blockchain, the idea of a cloud-based learner profile reminds me of the following graphic I created in January 2017.

A couple of relevant quotes here:

A number of players and factors are changing the field. Georgia Institute of Technology calls it “at-scale” learning; others call it the “mega-university” — whatever you call it, this is the advent of the very large, 100,000-plus-student-scale online provider. Coursera, edX, Udacity and FutureLearn (U.K.) are among the largest providers. But individual universities such as Southern New Hampshire, Arizona State and Georgia Tech are approaching the “at-scale” mark as well. One could say that’s evidence of success in online learning. And without question it is.

But, with highly reputable programs at this scale and tuition rates at half or below the going rate for regional and state universities, the impact is rippling through higher ed. Georgia Tech’s top 10-ranked computer science master’s with a total expense of less than $10,000 has drawn more than 10,000 qualified majors. That has an impact on the enrollment at scores of online computer science master’s programs offered elsewhere. The overall online enrollment is up, but it is disproportionately centered in affordable scaled programs, draining students from the more expensive, smaller programs at individual universities. The dominoes fall as more and more high-quality at-scale programs proliferate.

— Ray Schroeder

 

 

Education goes omnichannel. In today’s connected world, consumers expect to have anything they want available at their fingertips, and education is no different. Workers expect to be able to learn on-demand, getting the skills and knowledge they need in that moment, to be able to apply it as soon as possible. Moving fluidly between working and learning, without having to take time off to go to – or back to – school will become non-negotiable.

Anant Agarwal

 

From DSC:
Is there major change/disruption ahead? Could be…for many, it can’t come soon enough.

 

 

Deep learning turns mono recordings into immersive sound — from technologyreview.com by Emerging Technology from the arXiv

Excerpt:

We’ve had 3D images for decades, but effectively imitating 3D sound has always eluded researchers. Now a machine-learning algorithm can produce “2.5D” sound by watching a video.

 

Facial recognition has to be regulated to protect the public, says AI report — from technologyreview.com by Will Knight
The research institute AI Now has identified facial recognition as a key challenge for society and policymakers—but is it too late?

Excerpt (emphasis DSC):

Artificial intelligence has made major strides in the past few years, but those rapid advances are now raising some big ethical conundrums.

Chief among them is the way machine learning can identify people’s faces in photos and video footage with great accuracy. This might let you unlock your phone with a smile, but it also means that governments and big corporations have been given a powerful new surveillance tool.

A new report from the AI Now Institute (large PDF), an influential research institute based in New York, has just identified facial recognition as a key challenge for society and policymakers.

 

Also see:

EXECUTIVE SUMMARY
At the core of the cascading scandals around AI in 2018 are questions of accountability: who is responsible when AI systems harm us? How do we understand these harms, and how do we remedy them? Where are the points of intervention, and what additional research and regulation is needed to ensure those interventions are effective? Currently there are few answers to these questions, and the frameworks presently governing AI are not capable of ensuring accountability. As the pervasiveness, complexity, and scale of these systems grow, the lack of meaningful accountability and oversight – including basic safeguards of responsibility, liability, and due process – is an increasingly urgent concern.

Building on our 2016 and 2017 reports, the AI Now 2018 Report contends with this central
problem and addresses the following key issues:

  1. The growing accountability gap in AI, which favors those who create and deploy these
    technologies at the expense of those most affected
  2. The use of AI to maximize and amplify surveillance, especially in conjunction with facial
    and affect recognition, increasing the potential for centralized control and oppression
  3. Increasing government use of automated decision systems that directly impact individuals and communities without established accountability structures
  4. Unregulated and unmonitored forms of AI experimentation on human populations
  5. The limits of technological solutions to problems of fairness, bias, and discrimination

Within each topic, we identify emerging challenges and new research, and provide recommendations regarding AI development, deployment, and regulation. We offer practical pathways informed by research so that policymakers, the public, and technologists can better understand and mitigate risks. Given that the AI Now Institute’s location and regional expertise is concentrated in the U.S., this report will focus primarily on the U.S. context, which is also where several of the world’s largest AI companies are based.

 

 

From DSC:
As I said in this posting, we need to be aware of the emerging technologies around us. Just because we can, doesn’t mean we should. People need to be aware of — and involved with — which emerging technologies get rolled out (or not) and/or which features are beneficial to roll out (or not).

One of the things that's beginning to alarm me these days is how the United States has turned over the keys to the Maserati — think of an expensive, powerful machine — to youth who lack the life experiences to know how to handle such power and, often, the proper respect for it. Many of these youthful members of our society don't own the responsibility for the positive and negative influences and impacts that such powerful technologies can have (and the more senior execs haven't taken enough responsibility either)!

If you owned the car below, would you turn the keys of this ~$137,000+ car over to your 16-25 year old? Yet that’s what America has been doing for years. And, in some areas, we’re now paying the price.

 

If you owned this $137,000+ car, would you turn the keys of it over to your 16-25 year old?!

 

The corporate world continues to discard the hard-earned experience that age brings…as it shoves older people out of the workforce. (I hesitate to use the word wisdom…but in some cases, that's also relevant/involved here.) Then we, as a society, sit back and wonder how we got to this place.

Even technologists and programmers in their 20’s and 30’s are beginning to step back and ask…WHY did we develop this application or that feature? Was it — is it — good for society? Is it beneficial? Or should it be tabled or revised into something else?

Below is but one example — though I don't mean to pick on Microsoft, as they likely have more older workers than the Facebooks, Googles, or Amazons of the world. I fully realize that all of these companies have some older employees. But the youth-oriented culture in America today has almost become an obsession — and not just in the tech world. Turn on the TV, check out the new releases on Netflix, go see a movie in a theater, listen to the radio, cast but a glance at the magazines in the checkout lines, etc., and you'll instantly know what I mean.

In the workplace, there appears to be a bias against older employees as being less innovative or tech-savvy — a perspective that is often completely incorrect. Go check out LinkedIn for items re: age discrimination…it's a very real thing. Many of us over the age of 30 know this to be true if we've lost a job in the last decade or two and have tried to get a job that involves technology.

 

Microsoft argues facial-recognition tech could violate your rights — from finance.yahoo.com by Rob Pegoraro

Excerpt (emphasis DSC):

On Thursday, the American Civil Liberties Union provided a good reason for us to think carefully about the evolution of facial-recognition technology. In a study, the group used Amazon’s (AMZN) Rekognition service to compare portraits of members of Congress to 25,000 arrest mugshots. The result: 28 members were mistakenly matched with 28 suspects.

The ACLU isn’t the only group raising the alarm about the technology. Earlier this month, Microsoft (MSFT) president Brad Smith posted an unusual plea on the company’s blog asking that the development of facial-recognition systems not be left up to tech companies.

Saying that the tech “raises issues that go to the heart of fundamental human rights protections like privacy and freedom of expression,” Smith called for “a government initiative to regulate the proper use of facial recognition technology, informed first by a bipartisan and expert commission.”

But we may not get new laws anytime soon.

 

just because we can does not mean we should

 

Just because we can…

 


 

 

On one hand XR-related technologies
show some promise and possibilities…

 

The AR Cloud will infuse meaning into every object in the real world — from venturebeat.com by Amir Bozorgzadeh

Excerpt:

Indeed, if you haven’t yet heard of the “AR Cloud”, it’s time to take serious notice. The term was coined by Ori Inbar, an AR entrepreneur and investor who founded AWE. It is, in his words, “a persistent 3D digital copy of the real world to enable sharing of AR experiences across multiple users and devices.”

 

Augmented reality invades the conference room — from zdnet.com by Ross Rubin
Spatial extends the core functionality of video and screen sharing apps to a new frontier.

 

 

The 5 most innovative augmented reality products of 2018 — from next.reality.news by Adario Strange

 

 

Augmented, virtual reality major opens at Shenandoah U. next fall — from edscoop.com by Betsy Foresman

Excerpt:

“It’s not about how virtual reality functions. It’s about, ‘How does history function in virtual reality? How does biology function in virtual reality? How does psychology function with these new tools?’” he said.

The school hopes to prepare students for careers in a field with a market size projected to grow to $209.2 billion by 2022, according to Statista. With the technology still at its advent, Whelan compared VR to the introduction of the personal computer.

 

VR is leading us into the next generation of sports media — from venturebeat.com by Mateusz Przepiorkowski

 

 

Accredited surgery instruction now available in VR — from zdnet.com by Greg Nichols
The medical establishment has embraced VR training as a cost-effective, immersive alternative to classroom time.

 

Toyota is using Microsoft’s HoloLens to build cars faster — from cnn.com by Rachel Metz

From DSC:
But even in that posting the message is mixed…some pros, some cons. Some things are going well for XR-related techs…but others are not going very well.

 

 

…but on the other hand,
some things don’t look so good…

 

Is the Current Generation of VR Already Dead? — from medium.com by Andreas Goeldi

Excerpt:

Four years later, things are starting to look decidedly bleak. Yes, there are about 5 million Gear VR units and 3 million Sony Playstation VR headsets in market, plus probably a few hundred thousand higher-end Oculus and HTC Vive systems. Yes, VR is still being demonstrated at countless conferences and events, and big corporations that want to seem innovative love to invest in a VR app or two. Yes, Facebook just cracked an important low-end price point with its $200 Oculus Go headset, theoretically making VR affordable for mainstream consumers. Plus, there’s even more hype about Augmented Reality, which in a way could be a gateway drug to VR.

But it’s hard to ignore a growing feeling that VR is not developing as the industry hoped it would. So is that it again, we’ve seen this movie before, let’s all wrap it up and wait for the next wave of VR to come along about five years from now?

There are a few signs that are really worrying…

 

 

From DSC:
My take is that it’s too early to tell. We need to give things more time.

 

 

 

 

From DSC:
When a professor walks into the room, the mobile device that the professor is carrying notifies the system to automatically establish his or her preferred settings for the room — and/or voice recognition allows a voice-based interface to adjust the room’s settings:

  • The lights dim to 50%
  • The projector comes on
  • The screen comes down
  • The audio is turned up to his/her liking
  • The LMS is logged into with his/her login info and launches the class that he/she is teaching at that time of day
  • The temperature is checked and adjusted if too high or low
  • Etc.
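
The scenario above boils down to a lookup-and-apply step: the room detects who walked in, fetches that instructor's stored preferences, and pushes them out to each subsystem. Here is a minimal sketch of that logic; every device name, setting, and profile field is a hypothetical illustration, not any vendor's actual API.

```python
# Sketch of "the room recognizes the professor and configures itself."
# A real system would talk to lighting, AV, HVAC, and LMS backends; here
# the room state is just a dictionary so the flow is easy to follow.

PROFILES = {
    "prof-smith": {
        "lights_pct": 50,      # dim the lights to 50%
        "projector_on": True,  # power up the projector
        "screen_down": True,   # lower the screen
        "audio_level": 7,      # preferred volume
        "temp_f": 68,          # preferred room temperature
    },
}

class Room:
    """Tracks the current state of the room's controllable devices."""

    def __init__(self):
        # Default state before anyone enters the room.
        self.state = {
            "lights_pct": 100,
            "projector_on": False,
            "screen_down": False,
            "audio_level": 3,
            "temp_f": 72,
        }

    def apply_profile(self, instructor_id: str) -> dict:
        """Apply an instructor's saved preferences when they enter."""
        prefs = PROFILES.get(instructor_id)
        if prefs is None:
            return self.state  # unknown person: leave the room alone
        self.state.update(prefs)
        return self.state

room = Room()
state = room.apply_profile("prof-smith")
```

The trigger could equally be a mobile device's proximity beacon or a voice command; either way, the room only needs an identity to key the profile lookup.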
 

From DSC:
How long before voice drives most appliances, thermostats, etc?

Hisense is bringing Android and AI smarts to its 2019 TV range — from techradar.com by Stephen Lambrechts
Some big announcements planned for CES 2019

Excerpt (emphasis DSC):

Hisense has announced that it will unveil the next evolution of its VIDAA smart TV platform at CES 2019 next month, promising to take full advantage of artificial intelligence with version 3.0.

Each television in Hisense’s 2019 ULED TV lineup will boast the updated VIDAA 3.0 AI platform, with Amazon Alexa functionality fully integrated into the devices, meaning you won’t need an Echo device to use Alexa voice control features.

 

 

 

Virtual classes shouldn’t be cringeworthy. Here are 5 tips for teaching live online — from edsurge.com by Bonni Stachowiak (Columnist)

Excerpt:

Dear Bonni: I’m wanting to learn about best practices for virtual courses that are “live” (e.g., using a platform like Zoom). It differs both from face-to-face classroom learning and traditional (asynchronous) online courses. I’d love to know about resources addressing this learning format. —Keith Johnson, director of theological development at Cru. My team facilitates and teaches graduate-level theological courses for a non-profit.

Teaching a class by live video conference is quite different than being in person with a room full of students. But there are some approaches we can draw from traditional classrooms that work quite well in a live, online environment.

Here are some recommendations for virtual teaching…

 

 
 

A Space for Learning: A review of research on active learning spaces — by Robert Talbert and Anat Mor-Avi

Abstract:
Active Learning Classrooms (ALCs) are learning spaces specially designed to optimize the practice of active learning and amplify its positive effects in learners from young children through university-level learners. As interest in and adoption of ALCs has increased rapidly over the last decade, the need for grounded research in their effects on learners and schools has grown proportionately. In this paper, we review the peer-reviewed published research on ALCs, dating back to the introduction of “studio” classrooms and the SCALE-UP program up to the present day. We investigate the literature and summarize findings on the effects of ALCs on learning outcomes, student engagement, and the behaviors and practices of instructors as well as the specific elements of ALC design that seem to contribute the most to these effects. We also look at the emerging cultural impact of ALCs on institutions of learning, and we examine the drawbacks of the published research as well as avenues for potential future research in this area.

 

1: Introduction
1.1: What is active learning, and what is an active learning classroom?
Active learning is defined broadly to include any pedagogical method that involves students actively working on learning tasks and reflecting on their work, apart from watching, listening, and taking notes (Bonwell & Eison, 1991). Active learning has taken hold as a normative instructional practice in K12 and higher education institutions worldwide. Recent studies, such as the 2014 meta-analysis linking active learning pedagogies with dramatically reduced failure rates in university-level STEM courses (Freeman et al., 2014), have established that active learning drives increased student learning and engagement across disciplines, grade levels, and demographics.

As schools, colleges, and universities increasingly seek to implement active learning, concerns about the learning spaces used for active learning have naturally arisen. Attempts to implement active learning pedagogies in spaces that are not attuned to the particular needs of active learning — for example, large lecture halls with fixed seating — have resulted in suboptimal results and often frustration among instructors and students alike. In an effort to link architectural design to best practices in active learning pedagogy, numerous instructors, school leaders, and architects have explored how learning spaces can be differently designed to support active learning and amplify its positive effects on student learning. The result is a category of learning spaces known as Active Learning Classrooms (ALCs).

While there is no universally accepted definition of an ALC, the spaces often described by this term have several common characteristics:

  • ALCs are classrooms, that is, formal spaces in which learners convene for educational activities. We do not include less-formal learning spaces such as faculty offices, library study spaces, or “in-between” spaces located in hallways or foyers.
  • ALCs include deliberate architectural and design attributes that are specifically intended to promote active learning. These typically include moveable furniture that can be reconfigured into a variety of different setups with ease, seating that places students in small groups, plentiful horizontal and/or vertical writing surfaces such as whiteboards, and easy access to learning
    technologies (including technological infrastructure such as power outlets).
  • In particular, most ALCs have a “polycentric” or “acentric” design in which there is no clearly-defined front of the room by default. Rather, the instructor has a station which is either
    movable or located in an inconspicuous location so as not to attract attention; or perhaps there is no specific location for the instructor.
  • Finally, ALCs typically provide easy access to digital and analog tools for learning, such as multiple digital projectors, tablet or laptop computers, wall-mounted and personal whiteboards, or classroom response systems.

2.1: Research questions
The main question that this study intends to investigate is: What are the effects of the use of ALCs on student learning, faculty teaching, and institutional cultures? Within this broad overall question, we will focus on four research questions:

  1. What effects do ALCs have on measurable metrics of student academic achievement? Included in such metrics are measures such as exam scores, course grades, and learning gains on pre/post-test measures, along with data on the acquisition of “21st Century Skills”, which we will define using a framework (OCDE, 2009) which groups “21st Century Skills” into skills pertaining to information, communication, and ethical/social impact.
  2. What effects do ALCs have on student engagement? Specifically, we examine results pertaining to affective, behavioral, and cognitive elements of the idea of “engagement” as well as results that cut across these categories.
  3. What effect do ALCs have on the pedagogical practices and behaviors of instructors? In addition to their effects on students, we are also interested in the effects of ALCs on the instructors who use them. Specifically, we are interested in how ALCs affect instructor attitudes toward and implementations of active learning, how ALCs influence faculty adoption of active learning pedagogies, and how the use of ALCs affects instructors’ general and environmental behavior.
  4. What specific design elements of ALCs contribute significantly to the above effects? Finally, we seek to identify the critical elements of ALCs that contribute the most to their effects on student learning and instructor performance, including affordances and elements of design, architecture, and technology integration.

 

Active Learning Classrooms (ALCs)

 

 

The common denominator in the larger cultural effects of ALCs and active learning on students and instructors is the notion of connectedness, a concept we have already introduced in discussions of specific ALC design elements. By being freer to move and have physical and visual contact with each other in a class meeting, students feel more connected to each other and more connected to their instructor. By having an architectural design that facilitates not only movement but choice and agency — for example, through the use of polycentric layouts and reconfigurable furniture — the line between instructor and students is erased, turning the ALC into a vessel in which an authentic community of learners can take form.

 

 

 

 

Reflections on “Are ‘smart’ classrooms the future?” [Johnston]

Are ‘smart’ classrooms the future? — from campustechnology.com by Julie Johnston
Indiana University explores that question by bringing together tech partners and university leaders to share ideas on how to design classrooms that make better use of faculty and student time.

Excerpt:

To achieve these goals, we are investigating smart solutions that will:

  • Untether instructors from the room’s podium, allowing them control from anywhere in the room;
  • Streamline the start of class, including biometric login to the room’s technology, behind-the-scenes routing of course content to room displays, control of lights and automatic attendance taking;
  • Offer whiteboards that can be captured, routed to different displays in the room and saved for future viewing and editing;
  • Provide small-group collaboration displays and the ability to easily route content to and from these displays; and
  • Deliver these features through a simple, user-friendly and reliable room/technology interface.

Activities included collaborative brainstorming focusing on these questions:

  • What else can we do to create the classroom of the future?
  • What current technology exists to solve these problems?
  • What could be developed that doesn’t yet exist?
  • What’s next?

 

 

 

From DSC:
Though many people’s — including faculty members’ — eyes gloss over when we start talking about learning spaces and smart classrooms, it’s still an important topic. Personally, I’d rather be learning in an engaging, exciting learning environment that’s outfitted with a variety of tools (physically as well as digitally and virtually-based) that make sense for that community of learners. Also, faculty members have very limited time to get across campus and into the classroom and get things set up…the more things that can be automated in those setup situations the better!

I’ve long posted items re: machine-to-machine communications, voice recognition/voice-enabled interfaces, artificial intelligence, bots, algorithms, a variety of vendors and their products including Amazon’s Alexa / Apple’s Siri / Microsoft’s Cortana / and Google’s Home or Google Assistant, learning spaces, and smart classrooms, as I do think those things are components of our future learning ecosystems.

 

 

 


Global installed base of smart speakers to surpass 200 million in 2020, says GlobalData

The global installed base for smart speakers will hit 100 million early next year, before surpassing the 200 million mark at some point in 2020, according to GlobalData, a leading data and analytics company.

The company’s latest report: ‘Smart Speakers – Thematic Research’ states that nearly every leading technology company is either already producing a smart speaker or developing one, with Facebook the latest to enter the fray (launching its Portal device this month). The appetite for smart speakers is also not limited by geography, with China in particular emerging as a major marketplace.

Ed Thomas, Principal Analyst for Technology Thematic Research at GlobalData, comments: “It is only four years since Amazon unveiled the Echo, the first wireless speaker to incorporate a voice-activated virtual assistant. Initial reactions were muted but the device, and the Alexa virtual assistant it contained, quickly became a phenomenon, with the level of demand catching even Amazon by surprise.”

Smart speakers give companies like Amazon, Google, Apple, and Alibaba access to a vast amount of highly valuable user data. They also allow users to get comfortable interacting with artificial intelligence (AI) tools in general, and virtual assistants in particular, increasing the likelihood that they will use them in other situations, and they lock customers into a broader ecosystem, making it more likely that they will buy complementary products or access other services, such as online stores.

Thomas continues: “Smart speakers, particularly lower-priced models, are gateway devices, in that they give consumers the opportunity to interact with a virtual assistant like Amazon’s Alexa or Google’s Assistant, in a “safe” environment. For tech companies serious about competing in the virtual assistant sector, a smart speaker is becoming a necessity, hence the recent entry of Apple and Facebook into the market and the expected arrival of Samsung and Microsoft over the next year or so.”

In terms of the competitive landscape for smart speakers, Amazon was the pioneer and is still a dominant force, although its first-mover advantage has been eroded over the last year or so. Its closest challenger is Google, but neither company is present in the fastest-growing geographic market, China. Alibaba is the leading player there, with Xiaomi also performing well.

Thomas concludes: “With big names like Samsung and Microsoft expected to launch smart speakers in the next year or so, the competitive landscape will continue to fluctuate. It is likely that we will see two distinct markets emerge: the cheap, impulse-buy end of the spectrum, used by vendors to boost their ecosystems; and the more expensive, luxury end, where greater focus is placed on sound quality and aesthetics. This is the area of the market at which Apple has aimed the HomePod and early indications are that this is where Samsung’s Galaxy Home will also look to make an impact.”

Information based on GlobalData’s report: Smart Speakers – Thematic Research

 

 

 

 

Gartner: Immersive experiences among top tech trends for 2019 — from campustechnology.com by Dian Schaffhauser

Excerpt:

IT analyst firm Gartner has named its top 10 trends for 2019, and the “immersive user experience” is on the list, alongside blockchain, quantum computing and seven other drivers influencing how we interact with the world. The annual trend list covers breakout tech with broad impact and tech that could reach a tipping point in the near future.

 

 

 

Reflections on “Inside Amazon’s artificial intelligence flywheel” [Levy]

Inside Amazon’s artificial intelligence flywheel — from wired.com by Steven Levy
How deep learning came to power Alexa, Amazon Web Services, and nearly every other division of the company.

Excerpt (emphasis DSC):

Amazon loves to use the word flywheel to describe how various parts of its massive business work as a single perpetual motion machine. It now has a powerful AI flywheel, where machine-learning innovations in one part of the company fuel the efforts of other teams, who in turn can build products or offer services to affect other groups, or even the company at large. Offering its machine-learning platforms to outsiders as a paid service makes the effort itself profitable—and in certain cases scoops up yet more data to level up the technology even more.

It took a lot of six-pagers to transform Amazon from a deep-learning wannabe into a formidable power. The results of this transformation can be seen throughout the company—including in a recommendations system that now runs on a totally new machine-learning infrastructure. Amazon is smarter in suggesting what you should read next, what items you should add to your shopping list, and what movie you might want to watch tonight. And this year Thirumalai started a new job, heading Amazon search, where he intends to use deep learning in every aspect of the service.

“If you asked me seven or eight years ago how big a force Amazon was in AI, I would have said, ‘They aren’t,’” says Pedro Domingos, a top computer science professor at the University of Washington. “But they have really come on aggressively. Now they are becoming a force.”

Maybe the force.

 

 

From DSC:
When will we begin to see more mainstream recommendation engines for learning-based materials? With the demand for people to reinvent themselves, such a next generation learning platform can’t come soon enough!

  • Turning over control to learners to create/enhance their own web-based learner profiles; and allowing people to say who can access their learning profiles.
  • AI-based recommendation engines to help people identify curated, effective digital playlists for what they want to learn about.
  • Voice-driven interfaces.
  • Matching employees to employers.
  • Matching one’s learning preferences (not styles) with the content being presented as one piece of a personalized learning experience.
  • From cradle to grave. Lifelong learning.
  • Multimedia-based, interactive content.
  • Asynchronously and synchronously connecting with others learning about the same content.
  • Online-based tutoring/assistance; remote assistance.
  • Reinvent. Staying relevant. Surviving.
  • Competency-based learning.
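
To make the recommendation-engine bullet concrete: at its simplest, such an engine scores candidate playlists by how well their topics overlap the goals in a learner's cloud-based profile, nudged by the learner's stated preferences. The sketch below is a toy under those assumptions; the profile fields, playlist data, and scoring rule are all invented for illustration, not a description of any actual platform.

```python
# Toy recommendation engine: rank curated playlists against a learner
# profile by topic overlap, with a small boost when the playlist's
# format matches the learner's preferred format.

def recommend(profile: dict, playlists: list, top_n: int = 2) -> list:
    """Return the titles of the top_n best-matching playlists."""
    goals = set(profile["learning_goals"])
    scored = []
    for p in playlists:
        score = len(goals & set(p["topics"]))  # shared topics
        if p["format"] == profile["preferred_format"]:
            score += 1  # boost for matching the learner's preference
        scored.append((score, p["title"]))
    # Highest score first; break ties alphabetically by title.
    scored.sort(key=lambda s: (-s[0], s[1]))
    return [title for _, title in scored[:top_n]]

profile = {
    "learning_goals": ["data science", "python"],
    "preferred_format": "video",
}
playlists = [
    {"title": "Intro to Python", "topics": ["python"], "format": "video"},
    {"title": "Data Science 101", "topics": ["data science", "python"], "format": "text"},
    {"title": "Art History", "topics": ["art"], "format": "video"},
]
picks = recommend(profile, playlists)
```

A production system would of course learn these weights from behavior rather than hard-code them, but the core idea (profile in, ranked playlist out) is the same.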

 

The Living [Class] Room -- by Daniel Christian -- July 2012 -- a second device used in conjunction with a Smart/Connected TV

 

 

 

 

 

 

 

We’re about to embark on a period in American history where career reinvention will be critical, perhaps more so than it’s ever been before. In the next decade, as many as 50 million American workers—a third of the total—will need to change careers, according to McKinsey Global Institute. Automation, in the form of AI (artificial intelligence) and RPA (robotic process automation), is the primary driver. McKinsey observes: “There are few precedents in which societies have successfully retrained such large numbers of people.”

Bill Triant and Ryan Craig

 

 

 

Also relevant/see:

Online education’s expansion continues in higher ed with a focus on tech skills — from educationdive.com by James Paterson

Dive Brief:

  • Online learning continues to expand in higher ed with the addition of several online master’s degrees and a new for-profit college that offers a hybrid of vocational training and liberal arts curriculum online.
  • Inside Higher Ed reported the nonprofit learning provider edX is offering nine master’s degrees through five U.S. universities — the Georgia Institute of Technology, the University of Texas at Austin, Indiana University, Arizona State University and the University of California, San Diego. The programs include cybersecurity, data science, analytics, computer science and marketing, and they cost from around $10,000 to $22,000. Most offer stackable certificates, helping students who change their educational trajectory.
  • Former Harvard University Dean of Social Science Stephen Kosslyn, meanwhile, will open Foundry College in January. The for-profit, two-year program targets adult learners who want to upskill, and it includes training in soft skills such as critical thinking and problem solving. Students will pay about $1,000 per course, though the college is waiving tuition for its first cohort.

Benchmarking Higher Ed AV Staffing Levels — Revisited — from campustechnology.com by Mike Tomei
As AV-equipped classrooms on campus increase in both numbers and complexity, have AV departments staffed up accordingly? A recent survey sheds some light on how AV is managed in higher education.

Excerpt:

I think we can all agree that new AV system installs have a much higher degree of complexity compared to AV systems five or 10 years ago. The obvious culprits are active learning classrooms that employ multiple displays and matrix switching backends, and conferencing systems of varying complexity being installed in big and small rooms all over campus. But even if today’s standard basic classrooms are offering the same presentation functionality as they were five years ago, the backend AV technology running those systems has still increased in complexity. We’re trying to push very high resolution video signals around the room; copyright-protected digital content is coming into play; there are myriad BYOD devices and connectors that need to be supported; and we’re making a strong push to connect our AV devices to the enterprise network for monitoring and troubleshooting. This increase in AV system complexity just adds to the system design, installation and support burdens placed upon an AV department. Without an increase in FTE staff beyond what we’re seeing, there’s just no way that AV support can truly flourish on campuses.

Today we’re reopening the survey to continue to gather data about AV staffing levels, and we’ll periodically tabulate and publish the results for those that participate. Visit www.AV-Survey.com to take the survey. If you would like to request the full 2018 AV staffing survey results, including average AV department budgets, staffing levels by position, breakouts by public/private/community colleges and small/medium/large schools, please send an e-mail to me (mike@tomeiav.com) and to Craig Park from The Sextant Group (cpark@thesextantgroup.com).

Your next doctor’s appointment might be with an AI — from technologyreview.com by Douglas Heaven
A new wave of chatbots is replacing physicians and providing frontline medical advice—but are they as good as the real thing?

Excerpt:

The idea is to make seeking advice about a medical condition as simple as Googling your symptoms, but with many more benefits. Unlike self-diagnosis online, these apps lead you through a clinical-grade triage process—they’ll tell you if your symptoms need urgent attention or if you can treat yourself with bed rest and ibuprofen instead. The tech is built on a grab bag of AI techniques: language processing to allow users to describe their symptoms in a casual way, expert systems to mine huge medical databases, machine learning to string together correlations between symptom and condition.
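The excerpt's "grab bag of AI techniques" can be illustrated with a toy sketch: spot symptom keywords in a casually worded description (a stand-in for the language-processing step), then apply hand-written rules mapping symptom sets to urgency levels (a stand-in for the expert-system step). Everything below is a hypothetical illustration, not how Babylon or any real triage product actually works.

```python
# Toy sketch of clinical-grade triage as described above: naive keyword
# spotting plus an ordered rule table. Real systems use much richer NLP
# and curated medical knowledge bases; this data is invented.

SYMPTOM_KEYWORDS = {"fever", "cough", "chest pain", "headache", "rash"}

# Rules are checked in order; the first fully matched symptom set wins.
TRIAGE_RULES = [
    ({"chest pain"},     "urgent: seek immediate care"),
    ({"fever", "cough"}, "see a doctor within 24 hours"),
    (set(),              "self-care: rest and over-the-counter relief"),
]

def extract_symptoms(text):
    """Naive keyword spotting in a casual symptom description."""
    text = text.lower()
    return {kw for kw in SYMPTOM_KEYWORDS if kw in text}

def triage(text):
    """Return the advice from the first rule whose symptoms all appear."""
    found = extract_symptoms(text)
    for required, advice in TRIAGE_RULES:
        if required <= found:
            return advice

print(triage("I've had a fever and a nasty cough since Tuesday"))
```

Even this toy version shows the shape of the safety problem the article raises: the quality of the advice is only as good as the rules and the keyword matching, which is why the stakes are so much higher than for a bad friend recommendation.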

Babylon Health, a London-based digital-first health-care provider, has a mission statement it likes to share in a big, bold font: to put an accessible and affordable health service in the hands of every person on earth. The best way to do this, says the company’s founder, Ali Parsa, is to stop people from needing to see a doctor.

Not everyone is happy about all this. For a start, there are safety concerns. Parsa compares what Babylon does with your medical data to what Facebook does with your social activities—amassing information, building links, drawing on what it knows about you to prompt some action. Suggesting you make a new friend won’t kill you if it’s a bad recommendation, but the stakes are a lot higher for a medical app.

© 2024 | Daniel Christian