Why Facebook’s banned “Research” app was so invasive — from wired.com by Louise Matsakis

Excerpts:

Facebook reportedly paid users between the ages of 13 and 35 $20 a month to download the app through beta-testing companies like Applause, BetaBound, and uTest.


Apple typically doesn’t allow app developers to go around the App Store, but its enterprise program is one exception. It’s what allows companies to create custom apps not meant to be downloaded publicly, like an iPad app for signing guests into a corporate office. But Facebook used this program for a consumer research app, which Apple says violates its rules. “Facebook has been using their membership to distribute a data-collecting app to consumers, which is a clear breach of their agreement with Apple,” a spokesperson said in a statement. “Any developer using their enterprise certificates to distribute apps to consumers will have their certificates revoked, which is what we did in this case to protect our users and their data.” Facebook didn’t respond to a request for comment.

Facebook needed to bypass Apple’s usual policies because its Research app is particularly invasive. First, it requires users to install what is known as a “root certificate.” This lets Facebook look at much of your browsing history and other network data, even if it’s encrypted. The certificate is like a shape-shifting passport—with it, Facebook can pretend to be almost anyone it wants.

To use a nondigital analogy, Facebook not only intercepted every letter participants sent and received, it also had the ability to open and read them. All for $20 a month!

Facebook’s latest privacy scandal is a good reminder to be wary of mobile apps that aren’t available for download in official app stores. It’s easy to overlook how much of your information might be collected, or to accidentally install a malicious version of Fortnite, for instance. VPNs can be great privacy tools, but many free ones sell their users’ data in order to make money. Before downloading anything, especially an app that promises to earn you some extra cash, it’s always worth taking another look at the risks involved.
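
From a purely technical standpoint, the “root certificate” described above is what makes this kind of app so far-reaching: once a device trusts an extra certificate authority, an interception proxy can re-sign encrypted traffic on the fly and read it in the clear. As a rough, generic illustration only (a minimal Python sketch, not anything taken from Facebook’s app), the snippet below prints the issuer of the TLS certificate a server presents; on a device whose traffic runs through such a proxy, the issuer would typically be the proxy’s own certificate authority rather than a public one.

# Minimal, illustrative sketch: print the issuer of the TLS certificate
# presented for a given hostname. If traffic is being intercepted by a proxy
# whose root certificate has been installed on the device, the issuer shown
# here is typically that proxy's CA rather than a public certificate authority.
import socket
import ssl

def get_certificate_issuer(hostname: str, port: int = 443) -> dict:
    """Return the issuer fields of the certificate presented for hostname."""
    context = ssl.create_default_context()
    with socket.create_connection((hostname, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()
    # cert["issuer"] is a tuple of relative distinguished names,
    # e.g. ((("organizationName", "DigiCert Inc"),), ...)
    return {name: value for rdn in cert["issuer"] for (name, value) in rdn}

if __name__ == "__main__":
    print(get_certificate_issuer("www.wired.com"))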

 

The information below is per Laura Kelley (w/ Page 1 Solutions)


As you know, Apple has shut down Facebook’s ability to distribute internal iOS apps. The shutdown comes following news that Facebook has been using Apple’s program for internal app distribution to track teenage customers for “research.”

Dan Goldstein is the president and owner of Page 1 Solutions, a full-service digital marketing agency. He balances the needs of clients with the need to protect their consumers, which has become one of clients’ top concerns over the last year. Goldstein is also a former attorney, so he weighs the marketing side against the legal side when it comes to protection for both companies and their consumers. He says that while this is another blow for Facebook, it speaks volumes for Apple and its concern for consumers:

“Facebook continues to demonstrate that it does not value user privacy. The most disturbing thing about this news is that Facebook knew that its app violated Apple’s terms of service and continued to distribute the app to consumers after it was banned from the App Store. This shows, once again, that Facebook doesn’t value user privacy and goes to great lengths to collect private behavioral data to give it a competitive advantage. The FTC is already investigating Facebook’s privacy policies and practices. As Facebook’s efforts to collect and use private data continue to be exposed, it risks losing market share and may prompt additional governmental investigations and regulation,” Goldstein says.

“One positive that comes out of this story is that Apple seems to be taking a harder line on protecting user privacy than other tech companies. Apple has been making noises about protecting user privacy for several months. This action indicates that it is attempting to follow through on its promises,” Goldstein says.

 

 

A landmark ruling gives new power to sue tech giants for privacy harms — from fastcompany.com by Katharine Schwab

Excerpt:

A unanimous ruling by the Illinois Supreme Court says that companies that improperly gather people’s data can be sued for damages even without proof of concrete injuries, opening the door to legal challenges that Facebook, Google, and other businesses have resisted.

 

 

Facebook’s ’10 year’ challenge is just a harmless meme — right? — from wired.com by Kate O’Neill

Excerpts:

But the technology raises major privacy concerns; the police could use it to track not only people who are suspected of having committed crimes, but also people who are not committing crimes, such as protesters and others whom the police deem a nuisance.

It’s tough to overstate the fullness of how technology stands to impact humanity. The opportunity exists for us to make it better, but to do that we also must recognize some of the ways in which it can get worse. Once we understand the issues, it’s up to all of us to weigh in.

 

Google, Facebook, and the Legal Mess Over Face Scanning — from finance.yahoo.com by Jeff John Roberts

Excerpt:

When must companies receive permission to use biometric data like your fingerprints or your face? The question is a hot topic in Illinois where a controversial law has ensnared tech giants Facebook and Google, potentially exposing them to billions of dollars in liability over their facial recognition tools.

The lack of specific guidance from the Supreme Court has since produced ongoing confusion over what type of privacy violations can let people seek financial damages.

 


 

 

From DSC:
The legal and legislative areas need to close the gap between emerging technologies and the law.

What questions should we be asking about the skillsets that our current and future legislative representatives need? Do we need some of our representatives to be highly knowledgeable, technically speaking? 

What programs and other types of resources should we be offering our representatives to get up to speed on emerging technologies? Which blogs, websites, journals, e-newsletters, listservs, and/or other communication vehicles and/or resources should they have access to?

Along these lines, what about our judges? Can we offer them some of these resources as well? 

What changes do our law schools need to make to address this?

 

 

 

 
 

On one hand, XR-related technologies
show some promise and possibilities…

 

The AR Cloud will infuse meaning into every object in the real world — from venturebeat.com by Amir Bozorgzadeh

Excerpt:

Indeed, if you haven’t yet heard of the “AR Cloud”, it’s time to take serious notice. The term was coined by Ori Inbar, an AR entrepreneur and investor who founded AWE. It is, in his words, “a persistent 3D digital copy of the real world to enable sharing of AR experiences across multiple users and devices.”

 

Augmented reality invades the conference room — from zdnet.com by Ross Rubin
Spatial extends the core functionality of video and screen sharing apps to a new frontier.

 

 

The 5 most innovative augmented reality products of 2018 — from next.reality.news by Adario Strange

 

 

Augmented, virtual reality major opens at Shenandoah U. next fall — from edscoop.com by Betsy Foresman

Excerpt:

“It’s not about how virtual reality functions. It’s about, ‘How does history function in virtual reality? How does biology function in virtual reality? How does psychology function with these new tools?’” he said.

The school hopes to prepare students for careers in a field with a market size projected to grow to $209.2 billion by 2022, according to Statista. With the technology still at its advent, Whelan compared VR to the introduction of the personal computer.

 

VR is leading us into the next generation of sports media — from venturebeat.com by Mateusz Przepiorkowski

 

 

Accredited surgery instruction now available in VR — from zdnet.com by Greg Nichols
The medical establishment has embraced VR training as a cost-effective, immersive alternative to classroom time.

 

Toyota is using Microsoft’s HoloLens to build cars faster — from cnn.com by Rachel Metz

From DSC:
But even in that posting the message is mixed: some pros, some cons. Some things are going well for XR-related technologies, but other things are not going so well.

 

 

…but on the other hand,
some things don’t look so good…

 

Is the Current Generation of VR Already Dead? — from medium.com by Andreas Goeldi

Excerpt:

Four years later, things are starting to look decidedly bleak. Yes, there are about 5 million Gear VR units and 3 million Sony Playstation VR headsets in market, plus probably a few hundred thousand higher-end Oculus and HTC Vive systems. Yes, VR is still being demonstrated at countless conferences and events, and big corporations that want to seem innovative love to invest in a VR app or two. Yes, Facebook just cracked an important low-end price point with its $200 Oculus Go headset, theoretically making VR affordable for mainstream consumers. Plus, there’s even more hype about Augmented Reality, which in a way could be a gateway drug to VR.

But it’s hard to ignore a growing feeling that VR is not developing as the industry hoped it would. So is that it again, we’ve seen this movie before, let’s all wrap it up and wait for the next wave of VR to come along about five years from now?

There are a few signs that are really worrying…

 

 

From DSC:
My take is that it’s too early to tell. We need to give things more time.

 

 

 

Why should anyone believe Facebook anymore? — from wired.com by Fred Vogelstein

Excerpt:

Just since the end of September, Facebook announced the biggest security breach in its history, affecting more than 30 million accounts. Meanwhile, investigations in November revealed that, among other things, the company had hired a Washington firm to spread its own brand of misinformation on other platforms, including borderline anti-Semitic stories about financier George Soros. Just two weeks ago, a cache of internal emails dating back to 2012 revealed that at times Facebook thought a lot more about how to make money off users’ data than about how to protect it.

Now, according to a New York Times investigation into Facebook’s data practices published Tuesday, long after Facebook said it had taken steps to protect user data from the kinds of leakages that made Cambridge Analytica possible, the company continued to sustain special, undisclosed data-sharing arrangements with more than 150 companies—some into this year. Unlike with Cambridge Analytica, the Times says, Facebook provided access to its users’ data knowingly and on a greater scale.

 

What enabled them to deliver these apologies, year after year, was that these sycophantic monologues were always true enough to be believable. The Times’ story calls into question every one of those apologies—especially the ones issued this year.

There’s a simple takeaway from all this, and it’s not a pretty one: Facebook is either a mendacious, arrogant corporation in the mold of a 1980s-style Wall Street firm, or it is a company in much more disarray than it has been letting on. 

It’s hard to process this without finally realizing what it is that’s made us so angry with Silicon Valley, and Facebook in particular, in 2018: We feel lied to, like these companies are playing us, their users, for chumps, and they’re also laughing at us for being so naive.

 

 

Also related/see:

‘We’ve hit an inflection point’: Big Tech failed big-time in 2018 — from finance.yahoo.com by JP Mangalindan

Excerpt:

2018 will be remembered as the year the public’s big soft-hearted love affair with Big Tech came to a screeching halt.

For years, lawmakers and the public let massive companies like Facebook, Google, and Amazon run largely unchecked. Billions of people handed them their data — photos, locations, and other status-rich updates — with little scrutiny or question. Then came revelations around several high-profile data breaches from Facebook: a back-to-back series of rude awakenings that taught casual web-surfing, smartphone-toting citizens that uploading their data into the digital ether could have consequences. Google reignited the conversation around sexual harassment, spurring thousands of employees to walk out, while Facebook reminded some corners of the U.S. that racial bias, even in supposedly egalitarian Silicon Valley, remained alive and well. And Amazon courted well over 200 U.S. cities in its gaudy and protracted search for a second headquarters.

“I think 2018 was the year that people really called tech companies on the carpet about the way that they’ve been behaving, conducting their business,” explained Susan Etlinger, an analyst at the San Francisco-based Altimeter Group. “We’ve hit an inflection point where people no longer feel comfortable with the ways businesses are conducting themselves. At the same time, we’re also at a point, historically, where there’s just so much more willingness to call out businesses and institutions on bigotry, racism, sexism and other kinds of bias.”

 

The public’s love affair with Facebook hit its first major rough patch in 2016 when Russian trolls attempted to meddle with the 2016 U.S. presidential election using the social media platform. But it was the Cambridge Analytica controversy that may go down in internet history as the start of a series of back-to-back, bruising controversies for the social network, which, for years, served as the Silicon Valley poster child of the nouveau American Dream.

 

 

 

Google Glass wasn’t a failure. It raised crucial concerns. — from wired.com by Rose Eveleth

Excerpts:

So when Google ultimately retired Glass, it was in reaction to an important act of line drawing. It was an admission of defeat not by design, but by culture.

These kinds of skirmishes on the front lines of surveillance might seem inconsequential — but they can not only change the behavior of tech giants like Google, they can also change how we’re protected under the law. Each time we invite another device into our lives, we open up a legal conversation over how that device’s capabilities change our right to privacy. To understand why, we have to get wonky for a bit, but it’s worth it, I promise.

 

But where many people see Google Glass as a cautionary tale about tech adoption failure, I see a wild success. Not for Google of course, but for the rest of us. Google Glass is a story about human beings setting boundaries and pushing back against surveillance…

 

In the United States, the laws that dictate when you can and cannot record someone have several layers. But most of these laws were written when smartphones and digital home assistants weren’t even a glimmer in Google’s eye. As a result, they are mostly concerned with issues of government surveillance, not individuals surveilling each other or companies surveilling their customers. Which means that as cameras and microphones creep further into our everyday lives, there are more and more legal gray zones.

 

From DSC:
We need to be aware of the emerging technologies around us. Just because we can, doesn’t mean we should. People need to be aware of — and involved with — which emerging technologies get rolled out (or not) and/or which features are beneficial to roll out (or not).

One of the things that’s beginning to alarm me these days is how the United States has turned over the keys to the Maserati — i.e., think an expensive, powerful thing — to youth who lack the life experiences to know how to handle such power and, often, the proper respect for such power. Many of these youthful members of our society don’t own the responsibility for the positive and negative influences and impacts that such powerful technologies can have.

If you owned the car below, would you turn the keys of this ~$137,000+ car over to your 16-25 year old? Yet that’s what America has been doing for years. And, in some areas, we’re now paying the price.

 

If you owned this $137,000+ car, would you turn the keys of it over to your 16-25 year old?!

 

The corporate world continues to discard the hard-earned experience that age brings…as it shoves older people out of the workforce. (I hesitate to use the word wisdom…but in some cases, that’s also relevant/involved here.) Then we, as a society, sit back and wonder how we got to this place.

Even technologists and programmers in their 20s and 30s are beginning to step back and ask…WHY did we develop this application or that feature? Was it — is it — good for society? Is it beneficial? Or should it be tabled or revised into something else?

Below is but one example — though I don’t mean to pick on Microsoft, as they likely have more older workers than the Facebooks, Googles, or Amazons of the world. I fully realize that all of these companies have some older employees. But the youth-oriented culture in America today has almost become an obsession — and not just in the tech world. Turn on the TV, check out the new releases on Netflix, go see a movie in a theater, listen to the radio, cast but a glance at the magazines in the checkout lines, etc., and you’ll instantly know what I mean.

In the workplace, there appears to be a bias against older employees as being less innovative or tech-savvy — such a perspective is often completely incorrect. Go check out LinkedIn for items re: age discrimination…it’s a very real thing. But many of us over the age of 30 know this to be true if we’ve lost a job in the last decade or two and have tried to get a job that involves technology.

Microsoft argues facial-recognition tech could violate your rights — from finance.yahoo.com by Rob Pegoraro

Excerpt (emphasis DSC):

On Thursday, the American Civil Liberties Union provided a good reason for us to think carefully about the evolution of facial-recognition technology. In a study, the group used Amazon’s (AMZN) Rekognition service to compare portraits of members of Congress to 25,000 arrest mugshots. The result: 28 members were mistakenly matched with 28 suspects.

The ACLU isn’t the only group raising the alarm about the technology. Earlier this month, Microsoft (MSFT) president Brad Smith posted an unusual plea on the company’s blog asking that the development of facial-recognition systems not be left up to tech companies.

Saying that the tech “raises issues that go to the heart of fundamental human rights protections like privacy and freedom of expression,” Smith called for “a government initiative to regulate the proper use of facial recognition technology, informed first by a bipartisan and expert commission.”

But we may not get new laws anytime soon.

 

just because we can does not mean we should

 

Just because we can…

 


 


 

 

Hey bot, what’s next in line in chatbot technology? — from blog.engati.com by Imtiaz Bellary

Excerpts:

Scenario 1: Are bots the new apps?
Scenario 2: Bot conversations that make sense
Scenario 3: Can bots increase employee throughput?
Scenario 4: Let voice take over!

 

Voice as an input medium is catching on, with an increasing number of folks adopting Amazon Echo and other digital assistants for their daily chores. Can we expect bots to gauge your mood and provide a personalised experience rather than a standard response? In regulated scenarios, voice acts as an authentication mechanism for the bot to pursue actions. Voice as an input adds sophistication and makes it easy to do tasks quickly, thereby improving the user experience.

 

 

AI Now Report 2018 | December 2018  — from ainowinstitute.org

Meredith Whittaker, AI Now Institute, New York University, Google Open Research
Kate Crawford, AI Now Institute, New York University, Microsoft Research
Roel Dobbe, AI Now Institute, New York University
Genevieve Fried, AI Now Institute, New York University
Elizabeth Kaziunas, AI Now Institute, New York University
Varoon Mathur, AI Now Institute, New York University
Sarah Myers West, AI Now Institute, New York University
Rashida Richardson, AI Now Institute, New York University
Jason Schultz, AI Now Institute, New York University School of Law
Oscar Schwartz, AI Now Institute, New York University

With research assistance from Alex Campolo and Gretchen Krueger (AI Now Institute, New York University)

Excerpt (emphasis DSC):

Building on our 2016 and 2017 reports, the AI Now 2018 Report contends with this central problem, and provides 10 practical recommendations that can help create accountability frameworks capable of governing these powerful technologies.

  1. Governments need to regulate AI by expanding the powers of sector-specific agencies to oversee, audit, and monitor these technologies by domain.
  2. Facial recognition and affect recognition need stringent regulation to protect the public interest.
  3. The AI industry urgently needs new approaches to governance. As this report demonstrates, internal governance structures at most technology companies are failing to ensure accountability for AI systems.
  4. AI companies should waive trade secrecy and other legal claims that stand in the way of accountability in the public sector.
  5. Technology companies should provide protections for conscientious objectors, employee organizing, and ethical whistleblowers.
  6. Consumer protection agencies should apply “truth-in-advertising” laws to AI products and services.
  7. Technology companies must go beyond the “pipeline model” and commit to addressing the practices of exclusion and discrimination in their workplaces.
  8. Fairness, accountability, and transparency in AI require a detailed account of the “full stack supply chain.”
  9. More funding and support are needed for litigation, labor organizing, and community participation on AI accountability issues.
  10. University AI programs should expand beyond computer science and engineering disciplines. AI began as an interdisciplinary field, but over the decades has narrowed to become a technical discipline. With the increasing application of AI systems to social domains, it needs to expand its disciplinary orientation. That means centering forms of expertise from the social and humanistic disciplines. AI efforts that genuinely wish to address social implications cannot stay solely within computer science and engineering departments, where faculty and students are not trained to research the social world. Expanding the disciplinary orientation of AI research will ensure deeper attention to social contexts, and more focus on potential hazards when these systems are applied to human populations.

 

Also see:

After a Year of Tech Scandals, Our 10 Recommendations for AI — from medium.com by the AI Now Institute
Let’s begin with better regulation, protecting workers, and applying “truth in advertising” rules to AI

 

Also see:

Facial recognition: It’s time for action — from blogs.microsoft.com by Brad Smith

Excerpt:

As we discussed, this technology brings important and even exciting societal benefits but also the potential for abuse. We noted the need for broader study and discussion of these issues. In the ensuing months, we’ve been pursuing these issues further, talking with technologists, companies, civil society groups, academics and public officials around the world. We’ve learned more and tested new ideas. Based on this work, we believe it’s important to move beyond study and discussion. The time for action has arrived.

We believe it’s important for governments in 2019 to start adopting laws to regulate this technology. The facial recognition genie, so to speak, is just emerging from the bottle. Unless we act, we risk waking up five years from now to find that facial recognition services have spread in ways that exacerbate societal issues. By that time, these challenges will be much more difficult to bottle back up.

In particular, we don’t believe that the world will be best served by a commercial race to the bottom, with tech companies forced to choose between social responsibility and market success. We believe that the only way to protect against this race to the bottom is to build a floor of responsibility that supports healthy market competition. And a solid floor requires that we ensure that this technology, and the organizations that develop and use it, are governed by the rule of law.

 

From DSC:
This is a major heads-up to the American Bar Association (ABA), law schools, governments, legislatures around the country, the courts, and the corporate world, as well as to colleges, universities, and community colleges. The pace of emerging technologies is much faster than society’s ability to deal with them!

The ABA and law schools need to majorly pick up their pace — for the benefit of all within our society.

 

 

 

What Blockchain Could Mean for the Future of Journalism — from mediablog.prnewswire.com by Julian Dossett

 

Excerpt:

The digital era has utterly changed the way readers interact with the news.

Traditional news outlets struggle to remain relevant as the media sector’s influence is refocused online.

Journalism in the U.S. faces a number of challenges that blockchain technology has the potential to address and possibly solve — if the technology actually can achieve what it promises.

2018 itself has seen journalism move into uncharted waters as the industry comes up against issues stemming from the continued digital migration of news organizations.

 

Because blockchain functions as a platform to facilitate peer-to-peer transactions, there are a few news organizations that believe blockchain technology finally will enable micropayments to be widely adopted in the U.S.

 

 

 

 


Global installed base of smart speakers to surpass 200 million in 2020, says GlobalData

The global installed base for smart speakers will hit 100 million early next year, before surpassing the 200 million mark at some point in 2020, according to GlobalData, a leading data and analytics company.

The company’s latest report: ‘Smart Speakers – Thematic Research’ states that nearly every leading technology company is either already producing a smart speaker or developing one, with Facebook the latest to enter the fray (launching its Portal device this month). The appetite for smart speakers is also not limited by geography, with China in particular emerging as a major marketplace.

Ed Thomas, Principal Analyst for Technology Thematic Research at GlobalData, comments: “It is only four years since Amazon unveiled the Echo, the first wireless speaker to incorporate a voice-activated virtual assistant. Initial reactions were muted but the device, and the Alexa virtual assistant it contained, quickly became a phenomenon, with the level of demand catching even Amazon by surprise.”

Smart speakers give companies like Amazon, Google, Apple, and Alibaba access to a vast amount of highly valuable user data. They also allow users to get comfortable interacting with artificial intelligence (AI) tools in general, and virtual assistants in particular, increasing the likelihood that they will use them in other situations, and they lock customers into a broader ecosystem, making it more likely that they will buy complementary products or access other services, such as online stores.

Thomas continues: “Smart speakers, particularly lower-priced models, are gateway devices, in that they give consumers the opportunity to interact with a virtual assistant like Amazon’s Alexa or Google’s Assistant, in a “safe” environment. For tech companies serious about competing in the virtual assistant sector, a smart speaker is becoming a necessity, hence the recent entry of Apple and Facebook into the market and the expected arrival of Samsung and Microsoft over the next year or so.”

In terms of the competitive landscape for smart speakers, Amazon was the pioneer and is still a dominant force, although its first-mover advantage has been eroded over the last year or so. Its closest challenger is Google, but neither company is present in the fastest-growing geographic market, China. Alibaba is the leading player there, with Xiaomi also performing well.

Thomas concludes: “With big names like Samsung and Microsoft expected to launch smart speakers in the next year or so, the competitive landscape will continue to fluctuate. It is likely that we will see two distinct markets emerge: the cheap, impulse-buy end of the spectrum, used by vendors to boost their ecosystems; and the more expensive, luxury end, where greater focus is placed on sound quality and aesthetics. This is the area of the market at which Apple has aimed the HomePod and early indications are that this is where Samsung’s Galaxy Home will also look to make an impact.”

Information based on GlobalData’s report: Smart Speakers – Thematic Research

 

 

 

 

The 11 coolest announcements Google made at its biggest product event of the year — from businessinsider.com by Dave Smith

Excerpt:

On Tuesday (10/9), Google invited journalists to New York City to debut its newest smartphone, the Pixel 3, among several other hardware goodies.

Google made dozens of announcements in its 75-minute event, but a handful of “wow” moments stole the show.

Here are the 11 coolest announcements Google made at its big hardware event…

 

 

 

Google’s Pixel 3 event in 6 minutes — from businessinsider.com

 

 

Google unveiled its new Home Hub but it might be ‘late to the market’ — from barrons.com by Emily Bary

Excerpt:

Alphabet’s Google has sold millions of voice-enabled speakers, but it inched toward the future at a Tuesday launch event when it introduced the Home Hub smart screen.

Google isn’t the first company to roll out a screen-enabled home device with voice-assistant technology — Amazon.com released its Echo Show in July 2017. Meanwhile, Lenovo has gotten good reviews for its Smart Display, and Facebook introduced the Portal on Monday.

For the most part, though, consumers have stuck to voice-only devices, and it will be up to Google and its rivals to convince them that an added screen is worth paying for. They’ll also have to reassure consumers that they can trust big tech to maintain their privacy, an admittedly harder task these days after recent security issues at Google and Facebook.

 

Amazon was right to realize early on that consumers aren’t always comfortable buying items they can’t even see pictures of, and that it’s hard to remember directions you’ve heard but not seen.

 

 

Google has announced its first smart speaker with a screen — from businessinsider.com by Brandt Ranj

  • Google has announced the Google Hub, its first smart home speaker with a screen. It’s available for pre-order at Best Buy, Target, and Walmart for $149. The Hub will be released on October 22.
  • The Hub has a 7″ touchscreen and sits on a base with a built-in speaker and microphones, which you can use to play music, watch videos, get directions, and control smart home accessories with your voice.
  • Its biggest advantage is its ability to hook into Google’s first-party services, like YouTube and Google Maps, which none of its competitors can use.
  • If you’re an Android or Chromecast user, trust Google more than Amazon, or want a smaller smart home speaker with a screen, the Google Hub is now your best bet.

 

 

Google is shutting down Google+ for consumers following security lapse — from theverge.com by Ashley Carman

Excerpt:

Google is going to shut down the consumer version of Google+ over the next 10 months, the company writes in a blog post today. The decision follows the revelation of a previously undisclosed security flaw that exposed users’ profile data that was remedied in March 2018.

 

 

Google shutters Google+ after users’ personal data exposed — from cbsnews.com

 

 

Google+ is shutting down, and the site’s few loyal users are mourning – from cnbc.com by Jillian D’Onfro

  • Even though Google had diverted resources away from Google+, there’s a small loyal base of users devastated to see it go away.
  • One fan said the company is using its recently revealed security breach as an excuse to shutter the site.
  • Google said it will be closing Google+ in the coming months.

 

 

 
For museums, augmented reality is the next frontier — from wired.com by Arielle Pardes

Excerpt:

Mae Jemison, the first woman of color to go into space, stood in the center of the room and prepared to become digital. Around her, 106 cameras captured her image in 3-D, which would later render her as a life-sized hologram when viewed through a HoloLens headset.

Jemison was recording what would become the introduction for a new exhibit at the Intrepid Sea, Air, and Space Museum, which opens tomorrow as part of the Smithsonian’s annual Museum Day. In the exhibit, visitors will wear HoloLens headsets and watch Jemison materialize before their eyes, taking them on a tour of the Space Shuttle Enterprise—and through space history. They’re invited to explore artifacts both physical (like the Enterprise) and digital (like a galaxy of AR stars) while Jemison introduces women throughout history who have made important contributions to space exploration.

Interactive museum exhibits like this are becoming more common as augmented reality tech becomes cheaper, lighter, and easier to create.

 

 

Oculus will livestream its 5th Connect Conference on Oculus Venues — from vrscout.com by Kyle Melnick

Excerpt (emphasis DSC):

Using either an Oculus Go standalone device or a mobile Gear VR headset, users will be able to login to the Oculus Venues app and join other users for an immersive live stream of various developer keynotes and adrenaline-pumping esports competitions.

 

From DSC:
What are the ramifications of this for the future of webinars, teaching and learning, online learning, MOOCs and more…?

 

 

 

10 new AR features in iOS 12 for iPhone & iPad — from mobile-ar.reality.news by Justin Meyers

Excerpt:

Apple’s iOS 12 has finally landed. The big update appeared for everyone on Monday, Sept. 17, and hiding within are some pretty amazing augmented reality upgrades for iPhones, iPads, and iPod touches. We’ve been playing with them ever since the iOS 12 beta launched in June, and here are the things we learned that you’ll want to know about.

For now, here’s everything AR-related that Apple has included in iOS 12. There are some new features aimed to please AR fanatics as well as hook those new to AR into finally getting with the program. But all of the new AR features rely on ARKit 2.0, the latest version of Apple’s augmented reality framework for iOS.

 

 

Berkeley College Faculty Test VR for Learning — from campustechnology.com by Dian Schaffhauser

Excerpt:

In a pilot program at Berkeley College, members of a Virtual Reality Faculty Interest Group tested the use of virtual reality to immerse students in a variety of learning experiences. During winter 2018, seven different instructors in nearly as many disciplines used inexpensive Google Cardboard headsets along with apps on smartphones to virtually place students in North Korea, a taxicab and other environments as part of their classwork.

Participants used free mobile applications such as Within, the New York Times VR, Discovery VR, Jaunt VR and YouTube VR. Their courses included critical writing, international business, business essentials, medical terminology, international banking, public speaking and crisis management.

 

 

 

 