7 years after Steve Jobs waged war on Flash, it’s officially dying – from finance.yahoo.com by Kif Leswing

Excerpt:

Adobe is killing Flash, the software that millions used in the early 2000s to play web games and watch video in their web browsers.

The company announced the software was “end-of-life” in a blog post on Tuesday. From the blog post:

“Given this progress, and in collaboration with several of our technology partners – including Apple, Facebook, Google, Microsoft and Mozilla – Adobe is planning to end-of-life Flash. Specifically, we will stop updating and distributing the Flash Player at the end of 2020 and encourage content creators to migrate any existing Flash content to these new open formats.”

 

 

The Business of Artificial Intelligence — from hbr.org by Erik Brynjolfsson & Andrew McAfee

Excerpts (emphasis DSC):

The most important general-purpose technology of our era is artificial intelligence, particularly machine learning (ML) — that is, the machine’s ability to keep improving its performance without humans having to explain exactly how to accomplish all the tasks it’s given. Within just the past few years machine learning has become far more effective and widely available. We can now build systems that learn how to perform tasks on their own.

Why is this such a big deal? Two reasons. First, we humans know more than we can tell: We can’t explain exactly how we’re able to do a lot of things — from recognizing a face to making a smart move in the ancient Asian strategy game of Go. Prior to ML, this inability to articulate our own knowledge meant that we couldn’t automate many tasks. Now we can.

Second, ML systems are often excellent learners. They can achieve superhuman performance in a wide range of activities, including detecting fraud and diagnosing disease. Excellent digital learners are being deployed across the economy, and their impact will be profound.

In the sphere of business, AI is poised to have a transformational impact, on the scale of earlier general-purpose technologies. Although it is already in use in thousands of companies around the world, most big opportunities have not yet been tapped. The effects of AI will be magnified in the coming decade, as manufacturing, retailing, transportation, finance, health care, law, advertising, insurance, entertainment, education, and virtually every other industry transform their core processes and business models to take advantage of machine learning. The bottleneck now is in management, implementation, and business imagination.

The machine learns from examples, rather than being explicitly programmed for a particular outcome.
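
To make that pull-quote concrete, here is a toy sketch (ours, not the HBR authors') of "learning from examples" in Swift: the rule y = 2x + 1 is never written into the model; simple gradient descent recovers it from sample pairs. All numbers and names are illustrative only.

```swift
// Toy illustration of "learning from examples": the model is never told the
// rule y = 2x + 1; it recovers the weights from example (x, y) pairs.
var examples: [(x: Double, y: Double)] = []
for i in 0..<10 {
    let x = Double(i)
    examples.append((x: x, y: 2.0 * x + 1.0))   // the "hidden" rule to be discovered
}

var w = 0.0, b = 0.0                 // the model starts out knowing nothing
let learningRate = 0.05

for _ in 0..<2_000 {
    var gradW = 0.0, gradB = 0.0
    for (x, y) in examples {
        let error = (w * x + b) - y  // prediction minus target
        gradW += error * x
        gradB += error
    }
    let n = Double(examples.count)
    w -= learningRate * gradW / n    // nudge the parameters to shrink the error
    b -= learningRate * gradB / n
}

print("learned w ≈ \(w), b ≈ \(b)")  // approaches 2.0 and 1.0, learned purely from examples
```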

 

Let’s start by exploring what AI is already doing and how quickly it is improving. The biggest advances have been in two broad areas: perception and cognition. …For instance, Aptonomy and Sanbot, makers respectively of drones and robots, are using improved vision systems to automate much of the work of security guards. 

 

 

Machine learning is driving changes at three levels: tasks and occupations, business processes, and business models. 

 

 

You may have noticed that Facebook and other apps now recognize many of your friends’ faces in posted photos and prompt you to tag them with their names.

 

 

 

Now Everyone Really Can Code Thanks to Apple’s New College Course — from trendintech.com by Amanda Porter

 

ARKit: Augmented Reality on 195 million iPhones and iPads by year end — from blog.mapbox.com by Ceci Alvarez

 

Excerpt:

Apple’s ARKit just made augmented reality (AR) mainstream — and together with the Maps SDK for Unity, will fundamentally change the types of location-based apps that developers can build.

Using ARKit plus Maps SDK for Unity allows you to record your bike ride up to Twin Peaks in Strava and project the map of your route on your coffee table. As you plan your next vacation over dinner, you’ll be able to open your Lonely Planet app and have the Big Sur coast hovering in front of you as you browse the different camp sites. Or, when you’re at work appraising a property for flood insurance, you could just tilt up your phone and see the flood plain in front of you, and which parts of the property are susceptible to flooding. Or, when you’re teaching a geology class, you could project the evolution of Pangea in 3D for students to visualize instead of being limited by 2D images in textbooks.
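
For readers curious what the ARKit half of such an app looks like, here is a rough Swift sketch (the Mapbox Maps SDK for Unity side is omitted): it runs a world-tracking session with horizontal plane detection and, on a tap, lays a flat textured quad on whatever real surface was hit. The class name, the storyboard outlet, and the "route-map" image are hypothetical.

```swift
import UIKit
import SceneKit
import ARKit

// Hypothetical view controller: detect the coffee table, then pin a flat
// "map tile" onto it wherever the user taps.
class MapOnTableViewController: UIViewController {
    @IBOutlet var sceneView: ARSCNView!   // assumed to be wired up in a storyboard

    override func viewDidLoad() {
        super.viewDidLoad()
        let tap = UITapGestureRecognizer(target: self, action: #selector(handleTap(_:)))
        sceneView.addGestureRecognizer(tap)
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        let configuration = ARWorldTrackingConfiguration()
        configuration.planeDetection = .horizontal      // find tables, floors, etc.
        sceneView.session.run(configuration)
    }

    @objc func handleTap(_ gesture: UITapGestureRecognizer) {
        let location = gesture.location(in: sceneView)
        guard let hit = sceneView.hitTest(location, types: .existingPlaneUsingExtent).first else { return }

        let plane = SCNPlane(width: 0.5, height: 0.5)   // a 50 cm square "map tile"
        plane.firstMaterial?.diffuse.contents = UIImage(named: "route-map")  // placeholder texture
        let node = SCNNode(geometry: plane)
        node.eulerAngles.x = -.pi / 2                   // lay the quad flat on the surface
        node.position = SCNVector3(hit.worldTransform.columns.3.x,
                                   hit.worldTransform.columns.3.y,
                                   hit.worldTransform.columns.3.z)
        sceneView.scene.rootNode.addChildNode(node)
    }
}
```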

 

 

 

Inside Peter Jackson’s New Augmented Reality Studio — from cartoonbrew.com by Ian Failes

Excerpt:

At Apple’s recent Worldwide Developers Conference (WWDC) in San Jose, one of the stand-out demos was from Wingnut AR, the augmented reality studio started by director Peter Jackson and his partner Fran Walsh.

On stage, Wingnut AR’s creative director Alasdair Coull demonstrated a tabletop AR experience made using Apple’s upcoming augmented reality developer kit called ARKit and Epic Games’ Unreal Engine 4. The experience blended a real-world environment – the tabletop – with digital objects, in this case a sci-fi location complete with attacking spaceships, while being viewed live on an iPad.

 

 

 

 

Soon your desk will be a computer too — from wired.com

 

 

More Fun Uses for Augmented Reality & Your iPhone Keep Popping Up — from ar.reality.news by Juliet Gallagher

Excerpt:

Developers are really having a field day with Apple’s ARKit, announced last month. Since its release to developers, videos have been appearing all over the Internet of the different ways that developers are getting creative with ARKit using iPhones and iPads.

Here are a few videos of the cool things that are happening using ARKit:

 

 

 

 


Addendum on 7/10/17:

Google Lens offers a snapshot of the future for augmented reality and AI — from androidauthority.com by Adam Sinicki

http://www.androidauthority.com/google-lens-augmented-reality-785836/

 

Google Lens is a tool that effectively brings search into the real world. The idea is simple: you point your phone at something around you that you want more information on and Lens will provide that information.

 

Augmented reality ‘is virtually unlimited’ for Apple — from theaustralian.com.au by Chris Griffith

Excerpt:

Apple’s entry into augmented reality is gathering pace at an amazing rate, says one of its vice-presidents visiting Australia.

In an interview with The Australian yesterday, Apple vice-president of product marketing Greg “Joz” Joswiak said the enthusiasm of Apple’s development community building augmented reality (AR) applications had been “unbelievable”.

“They’ve built everything from virtual tape measures (to) ballerinas made out of wood dancing on floors. It’s absolutely incredible what people are doing in so little time.”

 

He said in the commercial space, AR applications would evolve for shopping, furniture placement, education, training and services.

 

 


Also see:

Excerpt:
Imagine being surrounded by a world of ghosts, things that aren’t there unless you look hard enough, and in the right way. With augmented reality technology, that’s possible—and museums are using it to their advantage. With augmented reality, museums are superimposing the virtual world right over what’s actually in front of you, bringing exhibits and artifacts to life in new ways.

These five spots are great examples of how augmented reality is enhancing the museum experience.

 

 

 


Also see:


 

 

Winner takes all — by Michael Moe, Luben Pampoulov, Li Jiang, Nick Franco, & Suzee Han

 

We did a lot of things that seemed crazy at the time. Many of those crazy things now have over a billion users, like Google Maps, YouTube, Chrome, and Android.

— Larry Page, CEO, Alphabet

 

 

Excerpt:

An alphabet is a collection of letters that represent language. Alphabet, accordingly, is a collection of companies that represent the many bets Larry Page is making to ensure his platform is built to not only survive, but to thrive in a future defined by accelerating digital disruption. It’s an “Alpha” bet on a diversified platform of assets.

If you look closely, the world’s top technology companies are making similar bets.

 


 

 

Technology in general, and the Internet in particular, is all about disproportionate gains to the leader in a category. Accordingly, as technology leaders like Facebook, Alphabet, and Amazon survey the competitive landscape, they have increasingly aimed to develop and acquire emerging technology capabilities across a broad range of complementary categories.

 

 

 

What a future, powerful, global learning platform will look & act like [Christian]


Learning from the Living [Class] Room:
A vision for a global, powerful, next generation learning platform

By Daniel Christian

NOTE: Having recently lost my Senior Instructional Designer position due to a staff reduction program, I am looking to help build such a platform as this. So if you are working on such a platform or know of someone who is, please let me know: danielchristian55@gmail.com.

I want to help people reinvent themselves quickly, efficiently, and cost-effectively — while providing more choice, more control to lifelong learners. This will become critically important as artificial intelligence, robotics, algorithms, and automation continue to impact the workplace.


 

The Living [Class] Room -- by Daniel Christian -- July 2012 -- a second device used in conjunction with a Smart/Connected TV

 

Learning from the Living [Class] Room:
A global, powerful, next generation learning platform

 

What does the vision entail?

  • A new, global, collaborative learning platform that offers more choice, more control to learners of all ages – 24×7 – and could become the organization that futurist Thomas Frey discusses here with Business Insider:

“I’ve been predicting that by 2030 the largest company on the internet is going to be an education-based company that we haven’t heard of yet,” Frey, the senior futurist at the DaVinci Institute think tank, tells Business Insider.

  • A learner-centered platform that is enabled by – and reliant upon – human beings but is backed up by a powerful suite of technologies that work together in order to help people reinvent themselves quickly, conveniently, and extremely cost-effectively
  • An AI-backed system for analyzing employment trends and opportunities will highlight those courses and “streams of content” that will help someone obtain the most in-demand skills
  • A system that tracks learning and, via Blockchain-based technologies, feeds all completed learning modules/courses into learners’ web-based learner profiles (a rough sketch of such a profile record follows this list)
  • A learning platform that provides customized, personalized recommendation lists – based upon the learner’s goals
  • A platform that delivers customized, personalized learning within a self-directed course (meant for those content creators who want to deliver more sophisticated courses/modules while moving people through the relevant Zones of Proximal Development)
  • Notifications and/or inspirational quotes will be available upon request to help provide motivation, encouragement, and accountability – helping learners establish habits of continual, lifelong-based learning
  • (Potentially) An online-based marketplace, matching learners with teachers, professors, and other such Subject Matter Experts (SMEs)
  • (Potentially) Direct access to popular job search sites
  • (Potentially) Direct access to resources that describe what other companies do/provide and descriptions of any particular company’s culture (as described by current and former employees and freelancers)
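
Purely as a thought experiment, and without implying any existing standard, the learner-profile record mentioned above might take a shape like the following Swift sketch. Every type name, field, and value here is hypothetical.

```swift
import Foundation

// Hypothetical shape for a "web-based learner profile"; no real standard is implied.
struct CompletedModule: Codable {
    let title: String
    let provider: String            // e.g. a university, an SME, an employer
    let completedOn: Date
    let skillsEarned: [String]      // tags an AI layer could match against job-market demand
    let credentialHash: String      // placeholder for a blockchain-anchored proof of completion
}

struct LearnerProfile: Codable {
    let learnerID: UUID
    var goals: [String]
    var completedModules: [CompletedModule]

    // The "recommendation" idea in rough form: which in-demand skills has this
    // learner not yet earned? (The in-demand list would come from the
    // employment-trend analysis mentioned above.)
    func missingSkills(inDemand: [String]) -> [String] {
        let earned = Set(completedModules.flatMap { $0.skillsEarned })
        return inDemand.filter { !earned.contains($0) }
    }
}

// Example use with invented data:
let profile = LearnerProfile(
    learnerID: UUID(),
    goals: ["Move into data analysis"],
    completedModules: [
        CompletedModule(title: "Intro to Statistics",
                        provider: "Example University",
                        completedOn: Date(),
                        skillsEarned: ["statistics"],
                        credentialHash: "abc123")
    ]
)
print(profile.missingSkills(inDemand: ["statistics", "sql", "data visualization"]))
// ["sql", "data visualization"]
```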

Further details:
While basic courses will be accessible via mobile devices, the optimal learning experience will leverage two or more displays/devices. So while smaller smartphones, laptops, and/or desktop workstations will be used to communicate synchronously or asynchronously with other learners, the larger displays will deliver an excellent learning environment for times when there is:

  • A Subject Matter Expert (SME) giving a talk or making a presentation on any given topic
  • A need to display multiple things going on at once, such as:
    • The SME(s)
    • An application or multiple applications that the SME(s) are using
    • Content/resources that learners are submitting in real-time (think Bluescape, T1V, Prysm, other)
    • The ability to annotate on top of the application(s) and point to things w/in the app(s)
    • Media being used to support the presentation such as pictures, graphics, graphs, videos, simulations, animations, audio, links to other resources, GPS coordinates for an app such as Google Earth, other
    • Other attendees (think Google Hangouts, Skype, Polycom, or other videoconferencing tools)
    • An (optional) representation of the Personal Assistant (such as today’s Alexa, Siri, M, Google Assistant, etc.) that’s being employed via the use of Artificial Intelligence (AI)

This new learning platform will also feature:

  • Voice-based commands to drive the system (via Natural Language Processing (NLP))
  • Language translation (using techs similar to what’s being used in Translate One2One, an earpiece powered by IBM Watson)
  • Speech-to-text capabilities for use w/ chatbots, messaging, inserting discussion board postings
  • Text-to-speech capabilities as an assistive technology and also for everyone to be able to be mobile while listening to what’s been typed (see the short sketch following this list)
  • Chatbots
    • For learning how to use the system
    • For asking questions of – and addressing any issues with – the organization owning the system (credentials, payments, obtaining technical support, etc.)
    • For asking questions within a course
  • As many profiles as needed per household
  • (Optional) Machine-to-machine-based communications to automatically launch the correct profile when the system is initiated (from one’s smartphone, laptop, workstation, and/or tablet to a receiver for the system)
  • (Optional) Voice recognition to efficiently launch the desired profile
  • (Optional) Facial recognition to efficiently launch the desired profile
  • (Optional) Upon system launch, to immediately return to where the learner previously left off
  • The capability of the webcam to recognize objects and bring up relevant resources for that object
  • A built in RSS feed aggregator – or a similar technology – to enable learners to tap into the relevant “streams of content” that are constantly flowing by them
  • Social media dashboards/portals – providing quick access to multiple sources of content and whereby learners can contribute their own “streams of content”
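
As one small, concrete example from the list above, the text-to-speech item could be prototyped today with Apple's shipping AVSpeechSynthesizer API; the sample sentence below is invented, and a real platform would read out course content, discussion-board replies, notifications, and so on.

```swift
import AVFoundation

// Minimal text-to-speech sketch using AVSpeechSynthesizer.
let synthesizer = AVSpeechSynthesizer()   // keep a strong reference while speaking

func speak(_ text: String, languageCode: String = "en-US") {
    let utterance = AVSpeechUtterance(string: text)
    utterance.voice = AVSpeechSynthesisVoice(language: languageCode)
    utterance.rate = AVSpeechUtteranceDefaultSpeechRate   // system default pace
    synthesizer.speak(utterance)
}

speak("New reply in your discussion thread: nice example of the zone of proximal development.")
```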

In the future, new forms of Human Computer Interaction (HCI) such as Augmented Reality (AR), Virtual Reality (VR), and Mixed Reality (MR) will be integrated into this new learning environment – providing entirely new means of collaborating with one another.

Likely players:

  • Amazon – personal assistance via Alexa
  • Apple – personal assistance via Siri
  • Google – personal assistance via Google Assistant; language translation
  • Facebook — personal assistance via M
  • Microsoft – personal assistance via Cortana; language translation
  • IBM Watson – cognitive computing; language translation
  • Polycom – videoconferencing
  • Blackboard – videoconferencing, application sharing, chat, interactive whiteboard
  • T1V, Prysm, and/or Bluescape – submitting content to a digital canvas/workspace
  • Samsung, Sharp, LG, and others – for large displays with integrated microphones, speakers, webcams, etc.
  • Feedly – RSS aggregator
  • _________ – for providing backchannels
  • _________ – for tools to create videocasts and interactive videos
  • _________ – for blogs, wikis, podcasts, journals
  • _________ – for quizzes/assessments
  • _________ – for discussion boards/forums
  • _________ – for creating AR, MR, and/or VR-based content

 

 

 

 

From Apple itself:

 

  • HomePod reinvents music in the home
    San Jose, California — Apple today announced HomePod, a breakthrough wireless speaker for the home that delivers amazing audio quality and uses spatial awareness to sense its location in a room and automatically adjust the audio. Designed to work with an Apple Music subscription for access to over 40 million songs, HomePod provides deep knowledge of personal music preferences and tastes and helps users discover new music.

    As a home assistant, HomePod is a great way to send messages, get updates on news, sports and weather, or control smart home devices by simply asking Siri to turn on the lights, close the shades or activate a scene. When away from home, HomePod is the perfect home hub, providing remote access and home automations through the Home app on iPhone or iPad.

 

 

 

 



Also see:



 

The 8 biggest announcements from Apple WWDC 2017 — from theverge.com by Natt Garun

Excerpt:

Apple introduced a new ARKit to let developers build augmented reality apps for the iPhone. The kit can help find planes, track motion, and estimate scale and ambient lighting. Popular apps like Pokémon Go will also use ARKit for improved real-time renders.

Rather than requiring external hardware like Microsoft’s HoloLens, Apple seems to be betting on ARKit to provide impressive quality imaging through a device most people already own. We’ll know more on how the quality actually compares when we get to try it out ourselves.
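
For a sense of what "finding planes" and "estimating ambient lighting" look like in code, here is a minimal sketch assuming the ARKit APIs Apple shipped with iOS 11; the observer class name is ours, not Apple's.

```swift
import ARKit

// Minimal observer showing the two capabilities the excerpt mentions:
// plane detection and ambient-light estimation.
final class ARSessionObserver: NSObject, ARSessionDelegate {

    // Called when ARKit adds new anchors, e.g. detected horizontal planes.
    func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
        for case let plane as ARPlaneAnchor in anchors {
            print("Found a plane roughly \(plane.extent.x) x \(plane.extent.z) metres")
        }
    }

    // Called every frame; lightEstimate lets virtual objects match the room's lighting.
    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        if let light = frame.lightEstimate {
            print("Ambient intensity: \(light.ambientIntensity) lumens")
        }
    }
}

// Wiring it up (a real app keeps strong references to both):
let observer = ARSessionObserver()
let session = ARSession()
session.delegate = observer
let configuration = ARWorldTrackingConfiguration()
configuration.planeDetection = .horizontal
session.run(configuration)
```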

 

 

Everything Apple Announced Today at WWDC — from wired.com by Arielle Pardes

Excerpt:

On Monday, over 5,000 developers packed the San Jose Convention Center to listen to Tim Cook and other Apple execs share the latest innovations out of Cupertino. Over the course of two and a half hours, the company unveiled its most powerful Mac yet, a long-awaited Siri speaker, and tons of new software upgrades across all of the Apple platforms, from your iPhone to your Apple Watch. Missed the keynote speech? Here’s a recap of the nine biggest announcements from WWDC 2017.

 

 

Apple is launching an iOS ‘ARKit’ for augmented reality apps — from theverge.com by Adi Robertson

Excerpt:

Apple has announced a tool it calls ARKit, which will provide advanced augmented reality capabilities on iOS. It’s supposed to allow for “fast and stable motion tracking” that makes objects look like they’re actually being placed in real space, instead of simply hovering over it.

 

 

Apple is finally bringing virtual reality to the Mac – from businessinsider.com by Matt Weinberger

Excerpt:

Apple is finally bringing virtual reality support to its Mac laptops and desktops, bringing the company up to speed with what many see as the next phase of computing.

At Monday’s Apple WWDC event in San Jose, the company announced that with this fall’s MacOS High Sierra update, the Mac will support external graphics hardware — meaning you can plug in a box and greatly increase your machine’s graphical capabilities.

In turn, that external hardware will give the Mac the boost it needs to support virtual reality headsets, which require superior performance to create an immersive experience.

 

 

From DSC:
After seeing the postings below, it made me wonder:

  • Will Starbucks, Apple Stores, etc. be “learning hubs” of the future?
    i.e., places that aren’t really what we think of as a school, college, or university, but where people can go to learn something with others in the same physical space; such locations will likely tie into online or blended-based means of learning as well.

“Today at Apple” bringing new experiences to every Apple Store

Excerpt:

Cupertino, California — Apple today announced plans to launch dozens of new educational sessions next month in all 495 Apple stores ranging in topics from photo and video to music, coding, art and design and more. The hands-on sessions, collectively called “Today at Apple,” will be led by highly-trained team members, and in select cities world-class artists, photographers and musicians, teaching sessions from basics and how-to lessons to professional-level programs.

Apple will also offer special programs for families and educators. Teachers can come together for Teacher Tuesday to learn new ways to incorporate technology into their classrooms, or aspiring coders of all ages can learn how to code in Swift, Apple’s programming language for iOS and Mac apps. Families can join weekend Kids Hour sessions ranging from music making to coding with robots. Small business owners can engage with global and local entrepreneurs in the new Business Circuits program.

We’re creating a modern-day town square, where everyone is welcome in a space where the best of Apple comes together to connect with one another, discover a new passion, or take their skill to the next level.

Apple wants kids to hang out at Apple stores — from qz.com by Mike Murphy

Excerpt:

If you’ve just gotten out of school for the day and want to hang out with your friends before you head home, where would you go? In the US, there’s a near-infinite selection of chain restaurants, coffee shops, diners, bookstores, movie theaters, and comic book stores to choose from. But Angela Ahrendts, Apple’s head of retail, wants the answer to be an Apple store.

Apple is in the process of revamping the look and feel of its retail outlets across the world, and to highlight some of the recent changes (including rebranding the “Genius Bar” to the “Genius Grove” and adding foliage everywhere), Ahrendts gave an interview to CBS This Morning, this morning. Ahrendts told CBS that she will see her work as a success when Generation Z, the catchall term for the generation behind the equally amorphous Millennials, decides of their own volition to hang out at Apple stores. As CBS reported…

 

The Dark Secret at the Heart of AI — from technologyreview.com by Will Knight
No one really knows how the most advanced algorithms do what they do. That could be a problem.

Excerpt:

The mysterious mind of this vehicle points to a looming issue with artificial intelligence. The car’s underlying AI technology, known as deep learning, has proved very powerful at solving problems in recent years, and it has been widely deployed for tasks like image captioning, voice recognition, and language translation. There is now hope that the same techniques will be able to diagnose deadly diseases, make million-dollar trading decisions, and do countless other things to transform whole industries.

But this won’t happen—or shouldn’t happen—unless we find ways of making techniques like deep learning more understandable to their creators and accountable to their users. Otherwise it will be hard to predict when failures might occur—and it’s inevitable they will. That’s one reason Nvidia’s car is still experimental.

 

“Whether it’s an investment decision, a medical decision, or maybe a military decision, you don’t want to just rely on a ‘black box’ method.”

 


This raises mind-boggling questions. As the technology advances, we might soon cross some threshold beyond which using AI requires a leap of faith. Sure, we humans can’t always truly explain our thought processes either—but we find ways to intuitively trust and gauge people. Will that also be possible with machines that think and make decisions differently from the way a human would? We’ve never before built machines that operate in ways their creators don’t understand. How well can we expect to communicate—and get along with—intelligent machines that could be unpredictable and inscrutable? These questions took me on a journey to the bleeding edge of research on AI algorithms, from Google to Apple and many places in between, including a meeting with one of the great philosophers of our time.

 

 

 

 

[On 4/3/17] the World’s First Live Hologram Phone Call was made between Seoul and New Jersey on a 5G Network — from patentlyapple.com

Excerpt:

[On 4/3/17] a little history was made. Verizon and Korea Telecom (KT) unveiled the world’s first live hologram international call service via the companies’ trial 5G networks established in Seoul and in New Jersey, respectively. Our cover graphic shows Verizon CEO Lowell McAdam (left) and KT CEO Hwang Chang-gyu demonstrating a hologram video call on a tablet PC at the KT headquarters in central Seoul Monday.

In the demonstration, a KT employee held a meeting with a Verizon employee in New Jersey who appeared as a hologram image on a monitor in the KT headquarters building.

 

With today’s revelations from South Korea, it’s easy to imagine that we’ll see Apple’s FaceTime offer a holographic experience in the not-too-distant future with added AR experiences as Apple’s CEO has conveyed.

 

 

 

 

Samsung’s personal assistant Bixby will take on Amazon Alexa, Apple Siri — from theaustralian.com.au by Chris Griffith

Excerpt:

Samsung has published details of its Bixby personal assistant, which will debut on its Galaxy S8 smartphone in New York next week.

Bixby will go head-to-head with Google Assistant, Microsoft Cortana, Amazon Echo and Apple Siri, in a battle to lure you into their artificial intelligence world.

In future, the personal assistant that you like may not only influence which phone you buy, but also the home automation system that you adopt.

This is because these personal assistants cross over into home use, which is why Samsung would bother with one of its own.

Given that the S8 will run Android Nougat, which includes Google Assistant, users will have two personal assistants on their phone, unless somehow one is disabled.

 

 

There are a lot of red flags with Samsung’s AI assistant in the new Galaxy S8 — from businessinsider.com by Steve Kovach

Excerpt:

There’s Siri. And Alexa. And Google Assistant. And Cortana. Now add another one of those digital assistants to the mix: Bixby, the new helper that lives inside Samsung’s latest phone, the Galaxy S8. But out of all the assistants that have launched so far, Bixby is the most curious and the most limited.

Samsung’s goal with Bixby was to create an assistant that can mimic all the functions you’re used to performing by tapping on your screen through voice commands. The theory is that phones are too hard to manage, so simply letting users tell their phone what they want to happen will make things a lot easier.

 

 

Samsung Galaxy S8: Hands on with the world’s most ambitious phone — from telegraph.co.uk by James Titcomb

Excerpt:

The S8 will also feature Bixby, Samsung’s new intelligent assistant. The company says Bixby is a bigger deal than Siri or Google Assistant – as well as simply asking for the weather, it will be deeply integrated with the phone’s everyday functions such as taking photos and sending them to people. Samsung has put a dedicated Bixby button on the S8 on the left hand side, but I wasn’t able to try it out because it won’t launch in the UK until later this year.

 

 

Samsung Galaxy S8 launch: Samsung reveals its long-awaited iPhone killer — from telegraph.co.uk by James Titcomb

 

 

 


Also see:


 

Recent years have brought some rapid development in the area of artificially intelligent personal assistants. Future iterations of the technology could fully revamp the way we interact with our devices.

 

 

 

21 bot experts make their predictions for 2017 — from venturebeat.com by Adelyn Zhou

Excerpt:

2016 was a huge year for bots, with major platforms like Facebook launching bots for Messenger, and Amazon and Google heavily pushing their digital assistants. Looking forward to 2017, we asked 21 bot experts, entrepreneurs, and executives to share their predictions for how bots will continue to evolve in the coming year.

From Jordi Torras, founder and CEO, Inbenta:
“Chatbots will get increasingly smarter, thanks to the adoption of sophisticated AI algorithms and machine learning. But also they will specialize more in specific tasks, like online purchases, customer support, or online advice. First attempts of chatbot interoperability will start to appear, with generalist chatbots, like Siri or Alexa, connecting to specialized enterprise chatbots to accomplish specific tasks. Functions traditionally performed by search engines will be increasingly performed by chatbots.”
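
To picture what "generalist chatbots connecting to specialized enterprise chatbots" might look like mechanically, here is a toy Swift sketch (ours, not Inbenta's): a generalist assistant classifies the intent and hands the request to whichever specialist bot registered for it. The intent names and the keyword matching are placeholders for a real natural-language-understanding layer.

```swift
// Toy sketch of chatbot interoperability: a generalist front end routes
// requests to specialised bots based on a detected intent.
protocol SpecialistBot {
    var handledIntents: Set<String> { get }
    func handle(_ utterance: String) -> String
}

struct SupportBot: SpecialistBot {
    let handledIntents: Set<String> = ["reset_password", "technical_issue"]
    func handle(_ utterance: String) -> String { "Opening a support ticket for: \(utterance)" }
}

struct PurchaseBot: SpecialistBot {
    let handledIntents: Set<String> = ["order_status", "buy_item"]
    func handle(_ utterance: String) -> String { "Checking your order: \(utterance)" }
}

struct GeneralistAssistant {
    let specialists: [SpecialistBot]

    // A real assistant would use an NLU model; keyword matching stands in here.
    func intent(for utterance: String) -> String {
        let lowered = utterance.lowercased()
        if lowered.contains("password") { return "reset_password" }
        if lowered.contains("order") { return "order_status" }
        return "unknown"
    }

    func respond(to utterance: String) -> String {
        let detected = intent(for: utterance)
        if let bot = specialists.first(where: { $0.handledIntents.contains(detected) }) {
            return bot.handle(utterance)        // hand off to the specialist
        }
        return "Sorry, I can't help with that yet."
    }
}

let assistant = GeneralistAssistant(specialists: [SupportBot(), PurchaseBot()])
print(assistant.respond(to: "Where is my order #1234?"))
```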

 

 

 

 

 


From DSC:
For those of us working within higher education, chatbots need to be on our radars. Here are 2 slides from my NGLS 2017 presentation.

 

 

 

 