1,600 donated Echo Dots say hello to Arizona State engineering students — by Corinne Lestch; with thanks to eduwire for their posting on this
ASU says the voice-controlled Amazon devices aren’t just for campus questions — they’re preparing students for future technologies, too.

Excerpt:

ASU students have set up Echo Dots, a hands-free, voice-controlled device the size of a hockey puck, in engineering residence halls.

Students can ask questions about topics ranging from the weather to campus sporting events to library hours to exam schedules.

“We’re continuing to add content as we’re learning what students want to learn about,” Rome said. “So there’s this feedback loop of what students want, and we monitor what questions are being asked.”

Amazon donated about 1,600 Dots to engineering students at ASU, so the technology belongs to the students, not to the school. The students can choose to use them or not, said John German, director for media relations and research communications.

“We have the largest engineering school in the country, and one of the things we’re trying to do is teach students the most advanced technology, the kinds of technology that are going to make them competitive in the job market when they get their degrees,” German said. “And voice technology is a field that’s growing. It’s going to play a role in the future.”

 

 

“Voice is becoming the new mobile of 10 years ago,” Rome said. “We’ve decided to be an early adopter of this technology.”

“There’s going to come some day when students can interact [with ASU’s student portal] via webpage or microphone on their mobile phones,” he said. “We think that’s inevitable.”

From DSC:
Before we get to the announcements in more detail, can you imagine being a teacher, a professor, or a trainer — with all of the required applications launched — if you were the presenter in this video at, say, the 12:45 mark?

If you are at all interested in emerging technologies and in what several pieces of our future learning ecosystems — and meeting spaces — could easily look like, you NEED to watch the entire presentation.


Also announced:

Microsoft’s purchase of AltspaceVR…in virtual reality!
This clip shows them meeting in a virtual space.


The era of Windows Mixed Reality begins October 17 — from blogs.windows.com by Alex Kipman
Samsung unveils Windows Mixed Reality headset, AltSpaceVR joins Microsoft, SteamVR catalog coming to Windows Mixed Reality this holiday.

 

 

At an event in San Francisco we unveiled our vision for Windows Mixed Reality, announced SteamVR and AltSpaceVR are coming to Windows Mixed Reality, introduced the new Samsung Odyssey HMD, and kicked off the holiday shopping season by announcing the availability of pre-orders for Windows Mixed Reality headsets at the Microsoft Store.

 

Also see:

 



 

Inside VR & AR

Oct 4th, 2017

 

Microsoft held its long-awaited launch of Windows 10 Mixed Reality yesterday, and while most of the new devices and products had been leaked earlier, there were still some big takeaways. Here are some of them:

  • Mixed Reality: Microsoft gave a demo of what its new platform will do, covering the AR/VR spectrum with games, apps, and experiences. One such experience is Cliff House, a virtual work space and entertainment room.
  • Altspace VR: When the pioneering social VR app shut down this summer and was rescued by a “third party,” people wondered who that was. Turns out it was Microsoft, which acquired Altspace VR for an undisclosed amount. The acquisition was announced yesterday.
  • Steam VR and Halo: Microsoft had previously announced that its new Mixed Reality headsets would support Steam VR titles. Developers can now access that support, and consumers will be able to access it later this year. In addition to the hundreds of VR titles available on Steam, on Oct. 17, Microsoft will offer free downloads of Halo Recruit.
  • Odyssey and other headsets: The new Windows 10 platform is launching alongside a host of new headsets. In addition to the new Odyssey, which was made in partnership with Samsung, there are other headsets forthcoming from Acer, HP, Dell, Lenovo, and Asus.
  • 2018 Olympics: This was announced previously in June, but yesterday Microsoft briefed the press that Intel is partnering with the International Olympic Committee to bring Windows Mixed Reality experiences to the 2018 games.

100 Data and Analytics Predictions Through 2021 — from Gartner

From DSC:
I just wanted to include some excerpts (see below) from Gartner’s 100 Data and Analytics Predictions Through 2021 report. I do so to illustrate how technology’s influence continues to expand throughout many societies around the globe, as well as to say that if you want a sure-thing job in the next 1-15 years, I would go into data science and/or artificial intelligence!

 



Excerpts:

As evidenced by its pervasiveness within our vast array of recently published Predicts 2017 research, it is clear that data and analytics are increasingly critical elements across most industries, business functions and IT disciplines. Most significantly, data and analytics are key to a successful digital business. This collection of more than 100 data-and-analytics-related Strategic Planning Assumptions (SPAs), or predictions through 2021, heralds several transformations and challenges ahead that CIOs and data and analytics leaders should embrace and include in their planning for successful strategies. Common themes across the discipline in general, and within particular business functions and industries, include:

  • Artificial intelligence (AI) is emerging as a core business and analytic competency. Beyond yesteryear’s hard-coded algorithms and manual data science activities, machine learning (ML) promises to transform business processes, reconfigure workforces, optimize infrastructure behavior and blend industries through rapidly improved decision making and process optimization.
  • Natural language is beginning to play a dual role in many organizations and applications as a source of input for analytic and other applications, and a variety of output, in addition to traditional analytic visualizations.
  • Information itself is being recognized as a corporate asset (albeit not yet a balance sheet asset), prompting organizations to become more disciplined about monetizing, managing and measuring it as they do with other assets. This includes “spending” it like cash, selling/licensing it to others, participating in emerging data marketplaces, applying asset management principles to improve its quality and availability, and quantifying its value and risks in a variety of ways.
  • Smart devices that both produce and consume Internet of Things (IoT) data will also move intelligent computing to the edge of business functions, enabling devices in almost every industry to operate and interact with humans and each other without a centralized command and control. The resulting opportunities for innovation are unbounded.
  • Trust becomes the watchword for businesses, devices and information, leading to the creation of digital ethics frameworks, accreditation and assessments. Most attempts at leveraging blockchain as a trust mechanism fail until technical limitations, particularly performance, are solved.

Education
Significant changes to the global education landscape have taken shape in 2016, and spotlight new and interesting trends for 2017 and beyond. “Predicts 2017: Education Gets Personal” is focused on several SPAs, each uniquely contributing to the foundation needed to create the digitalized education environments of the future. Organizations and institutions will require new strategies to leverage existing and new technologies to maximize benefits to the organization in fresh and innovative ways.

  • By 2021, more than 30% of institutions will be forced to execute on a personalization strategy to maintain student enrollment.
  • By 2021, the top 100 higher education institutions will have to adopt AI technologies to stay competitive in research.

Artificial Intelligence
Business and IT leaders are stepping up to a broad range of opportunities enabled by AI, including autonomous vehicles, smart vision systems, virtual customer assistants, smart (personal) agents and natural-language processing. Gartner believes that this new general-purpose technology is just beginning a 75-year technology cycle that will have far-reaching implications for every industry. In “Predicts 2017: Artificial Intelligence,” we reflect on the near-term opportunities, and the potential burdens and risks that organizations face in exploiting AI. AI is changing the way in which organizations innovate and communicate their processes, products and services.

Practical strategies for employing AI and choosing the right vendors are available to data and analytics leaders right now.

  • By 2019, more than 10% of IT hires in customer service will mostly write scripts for bot interactions.
  • Through 2020, organizations using cognitive ergonomics and system design in new AI projects will achieve long-term success four times more often than others.
  • By 2020, 20% of companies will dedicate workers to monitor and guide neural networks.
  • By 2019, startups will overtake Amazon, Google, IBM and Microsoft in driving the AI economy with disruptive business solutions.
  • By 2019, AI platform services will cannibalize revenues for 30% of market-leading companies.

“Predicts 2017: Drones”

  • By 2020, the top seven commercial drone manufacturers will all offer analytical software packages.

“Predicts 2017: The Reinvention of Buying Behavior in Vertical-Industry Markets”

  • By 2021, 30% of net new revenue growth from industry-specific solutions will include AI technology.

Advanced Analytics and Data Science
Advanced analytics and data science are fast becoming mainstream solutions and competencies in most organizations, even supplanting traditional BI and analytics resources and budgets. They allow more types of knowledge and insights to be extracted from data. To become and remain competitive, enterprises must seek to adopt advanced analytics, and adapt their business models, establish specialist data science teams and rethink their overall strategies to keep pace with the competition. “Predicts 2017: Analytics Strategy and Technology” offers advice on overall strategy, approach and operational transformation to algorithmic business that leadership needs to build to reap the benefits.

  • By 2018, deep learning (deep neural networks [DNNs]) will be a standard component in 80% of data scientists’ tool boxes.
  • By 2020, more than 40% of data science tasks will be automated, resulting in increased productivity and broader usage by citizen data scientists.
  • By 2019, natural-language generation will be a standard feature of 90% of modern BI and analytics platforms.
  • By 2019, 50% of analytics queries will be generated using search, natural-language query or voice, or will be autogenerated.
  • By 2019, citizen data scientists will surpass data scientists in the amount of advanced analysis produced.

 

 

By 2020, 95% of video/image content will never be viewed by humans; instead, it will be vetted by machines that provide some degree of automated analysis.

 

 

Through 2020, lack of data science professionals will inhibit 75% of organizations from achieving the full potential of IoT.

Amazon and Codecademy team up for free Alexa skills training — from venturebeat.com by Khari Johnson

Excerpt:

Amazon and tech training app Codecademy have collaborated to create a series of free courses. Available today, the courses are meant to train developers as well as beginners how to create skills, the voice apps that interact with Alexa.

Since opening Alexa to third-party developers in 2015, more than 20,000 skills have been made available in the Alexa Skills Store.
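
For readers wondering what a “skill” actually looks like under the hood, here is a minimal sketch of a custom-skill backend, written as an AWS Lambda-style handler that works directly with the raw Alexa request/response JSON. The intent name and the reply text are hypothetical placeholders, not part of any real ASU or Codecademy skill.

```python
# A minimal sketch of an Alexa custom-skill backend, written as an AWS
# Lambda-style handler that works directly with the raw Alexa request/response
# JSON. The intent name ("LibraryHoursIntent") and the reply text are
# hypothetical examples for illustration only.

def lambda_handler(event, context):
    request = event.get("request", {})
    request_type = request.get("type")

    if request_type == "LaunchRequest":
        speech = "Welcome. You can ask me about library hours."
    elif request_type == "IntentRequest":
        intent_name = request.get("intent", {}).get("name")
        if intent_name == "LibraryHoursIntent":          # hypothetical intent
            speech = "The library is open from 7 a.m. to midnight today."
        else:
            speech = "Sorry, I don't know that one yet."
    else:
        speech = "Goodbye."

    # Alexa expects this response envelope shape.
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": True,
        },
    }
```

In practice most developers build this with the Alexa Skills Kit SDKs rather than raw JSON, but the request/response shape is the same.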

DeepMind, Vodafone, Google & Facebook – Deep Learning & AI Highlights — from re-work.co by Nikita Johnson

Excerpt:

This week, the Deep Learning Summit and AI Assistant Summit saw over 450 DL and AI experts and enthusiasts come together to learn from each other and explore the most recent research and progressions in the space. Over the past two days we’ve heard from the likes of Amazon, Facebook, Google, Vodafone, as well as universities such as Cambridge, Warwick, UCL, Imperial, and exciting new startups like Jukedeck and Echobox. Topics have been incredibly diverse, covering NLP, space exploration, ML for music composition, and many more.

We’ve collected some of our favourite takeaways from both tracks over the last two days, as well as hearing what our attendees thought.

What did we hear at the AI Assistant Summit?
I’m driving in France and Google Translate is automatically translating all the French road signs for me and directing me to my location, telling me my time of arrival – this is the future. There is no interface; there is no screen.
Adi Chhabra, Evolution of AI & Machine Learning in Customer Experience – Beyond Interfaces, Vodafone

We are at the beginning of the era of assistance. In the future every employee will have an assistant to help him with decision making.
Christophe Bourguignat, Deep Learning for Conversational Intelligence on Analytics Data, Zelros

 

 

Six reasons why disruption is coming to learning departments — from feathercap.net, with thanks to Mr. Tim Seager for this resource

Excerpts:

  1. Training materials and interactions will not just be pre-built courses but any structured or unstructured content available to the organization.
  2. Curation of all learning, employee or any useful organizational content will become a whole lot easier.
  3. The learning department won’t have to build it all themselves.
  4. Learning bots and voice-enabled learning.
  5. Current workplace learning systems and LMSs will go through a big transition or they will lose relevancy.
  6. Learning departments will go beyond onboarding, compliance training and leadership training and move to training everyone in the company on all job skills.

 

A successful example of this is Amazon.com. As shoppers on their site, we have access to millions of book and product SKUs. Amazon uses a combination of all three techniques to position the right book or product based on our behavior and peer experiences, as well as on a semantic understanding of the product page we’re viewing. There’s no reason we can’t have the same experience on workplace learning systems, where all viable learning content/company content could be organized and disseminated to each learner at the right time and in the right circumstance.
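
The blend of signals Feathercap describes (our behavior, peer experiences, and a semantic understanding of the content) is essentially what recommender systems do. Below is a toy collaborative-filtering sketch, with invented learners, courses, and engagement scores, showing how a learning platform might surface the right content for each learner:

```python
# A toy sketch of collaborative filtering for a workplace learning system.
# The learners, courses, and engagement scores below are invented for
# illustration only.
import numpy as np

learners = ["ana", "ben", "chris"]
courses = ["Python Basics", "Data Viz", "Security 101", "Negotiation"]

# Rows = learners, columns = courses; values = engagement (0 = not taken).
engagement = np.array([
    [5, 3, 0, 0],
    [4, 0, 4, 1],
    [0, 3, 5, 0],
], dtype=float)

def cosine(u, v):
    denom = np.linalg.norm(u) * np.linalg.norm(v)
    return (u @ v) / denom if denom else 0.0

def recommend(learner_idx, top_n=2):
    # Weight every other learner's engagement by similarity to this learner,
    # then suggest the highest-scoring courses this learner hasn't taken yet.
    sims = np.array([cosine(engagement[learner_idx], engagement[j])
                     for j in range(len(learners))])
    sims[learner_idx] = 0.0
    scores = sims @ engagement
    scores[engagement[learner_idx] > 0] = -np.inf   # hide already-taken courses
    best = np.argsort(scores)[::-1][:top_n]
    return [courses[i] for i in best if np.isfinite(scores[i])]

print(recommend(0))   # suggestions for "ana", e.g. ['Security 101', 'Negotiation']
```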

From DSC:
Several of the items Feathercap mentions in their solid posting remind me of my vision for a next generation learning platform:

  • Contributing to — and tapping into — streams of content
  • Lifelong learning and reinventing oneself
  • Artificial intelligence, including Natural Language Processing (NLP) and the use of voice to drive systems/functionality
  • Learning agents/bots
  • 24×7 access
  • Structured and unstructured learning
  • Socially-based means of learning
  • …and more

 

The Living [Class] Room -- by Daniel Christian -- July 2012 -- a second device used in conjunction with a Smart/Connected TV


The End of Typing: The Next Billion Mobile Users Will Rely on Video and Voice — from wsj.com by Eric Bellman
Tech companies are rethinking products for the developing world, creating new winners and losers

Excerpt:

The internet’s global expansion is entering a new phase, and it looks decidedly unlike the last one.

Instead of typing searches and emails, a wave of newcomers—“the next billion,” the tech industry calls them—is avoiding text, using voice activation and communicating with images.

 

 

From DSC:
The above article reminds me that our future learning platforms will be largely driven by our voices. That’s why I put it into my vision of a next generation learning platform.

Why Natural Language Processing is the Future of Business Intelligence — from dzone.com by Gur Tirosh
Until now, we have been interacting with computers in a way that they understand, rather than us. We have learned their language. But now, they’re learning ours.

Excerpt:

Every time you ask Siri for directions, a complex chain of cutting-edge code is activated. It allows “her” to understand your question, find the information you’re looking for, and respond to you in a language that you understand. This has only become possible in the last few years. Until now, we have been interacting with computers in a way that they understand, rather than us. We have learned their language.

But now, they’re learning ours.

The technology underpinning this revolution in human-computer relations is Natural Language Processing (NLP). And it’s already transforming BI, in ways that go far beyond simply making the interface easier. Before long, business-transforming, life-changing information will be discovered merely by talking with a chatbot.

This future is not far away. In some ways, it’s already here.

What Is Natural Language Processing?
NLP, otherwise known as computational linguistics, is the combination of Machine Learning, AI, and linguistics that allows us to talk to machines as if they were human.

 

 

But NLP aims to eventually render GUIs — even UIs — obsolete, so that interacting with a machine is as easy as talking to a human.
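
To make that claim concrete, here is a deliberately tiny sketch of the idea behind a natural-language interface to BI: map a typed or spoken question onto a structured query. The table name, column names, and question patterns below are invented; real NLP-for-BI products use trained semantic parsers rather than a couple of regular expressions.

```python
# A toy illustration of the idea behind NLP-driven BI: turn a plain-English
# question into a structured query. The "sales" table, its columns, and the
# question patterns are invented; real systems use trained semantic parsers.
import re

def question_to_sql(question: str) -> str:
    q = question.lower()

    m = re.search(r"total (\w+) by (\w+)", q)
    if m:
        measure, dimension = m.groups()
        return f"SELECT {dimension}, SUM({measure}) FROM sales GROUP BY {dimension};"

    m = re.search(r"top (\d+) (\w+) by (\w+)", q)
    if m:
        n, dimension, measure = m.groups()
        return (f"SELECT {dimension}, SUM({measure}) AS total FROM sales "
                f"GROUP BY {dimension} ORDER BY total DESC LIMIT {n};")

    return "-- Sorry, I couldn't map that question to a query."

print(question_to_sql("Show me total revenue by region"))
print(question_to_sql("What are the top 5 products by revenue?"))
```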

Voice technology may be poised for a breakthrough with Chinese consumers — from jwtintelligence.com by Shepherd Laughlin

Excerpt:

Worldwide, more consumers are interacting with technology using their voices. As the Innovation Group London and Mindshare Futures found in our Speak Easy report, consumers who use voice technology think that it frees them from having to look at screens, helps them organize their lives, and is less mentally draining than traditional touch or typing devices.

Among the many markets where voice technology is catching on, China faces unique challenges and opportunities. The complex Chinese writing system means that current methods of selecting characters using keyboards can be slow and laborious, which suggests that fully functional voice technology would find an instant market. Spoken Chinese, however, has proven difficult for computers to decipher.

But voice technology is moving ahead anyway. 2015 saw the release of the LingLong DingDong, a product created through a partnership between iFlytek and JD.com, which has become known as China’s answer to the Amazon Echo. The device can understand both Mandarin and Cantonese. It plays music, gives directions, answers questions about the weather and the news, and more. The Tmall Genie, a similar product, functions using Alibaba’s voice assistant, AliGenie.

Amazon’s Alexa passes 15,000 skills, up from 10,000 in February — from techcrunch.com by Sarah Perez

Excerpt:

Amazon’s Alexa voice platform has now passed 15,000 skills — the voice-powered apps that run on devices like the Echo speaker, Echo Dot, newer Echo Show and others. The figure is up from the 10,000 skills Amazon officially announced back in February, which had then represented a 3x increase from September.

The new 15,000 figure was first reported via third-party analysis from Voicebot, and Amazon has now confirmed to TechCrunch that the number is accurate.

According to Voicebot, which only analyzed skills in the U.S., the milestone was reached for the first time on June 30, 2017. During the month of June, new skill introductions increased by 23 percent, up from the less than 10 percent growth that was seen in each of the prior three months.

The milestone also represents a more than doubling of the number of skills that were available at the beginning of the year, when Voicebot reported there were then 7,000 skills. That number was officially confirmed by Amazon at CES.

 

 


From DSC:
Again, I wonder…what are the implications for learning from this new, developing platform?


 

 

Video: 4 FAQs about Watson as tutor — from er.educause.edu by Satya Nitta

Excerpt:

How is IBM using Watson’s intelligent tutoring system? So we are attempting to mimic the best practices of human tutoring. The gold standard will always remain one on one human to human tutoring. The whole idea here is an intelligent tutoring system as a computing system that works autonomously with learners, so there is no human intervention. It’s basically pretending to be the teacher itself and it’s working with the learner. What we’re attempting to do is we’re attempting to basically put conversational systems, systems that understand human conversation and dialogue, and we’re trying to build a system that, in a very natural way, interacts with people through conversation. The system basically has the ability to ask questions, to answer questions, to know who you are and where you are in your learning journey, what you’re struggling with, what you’re strong on and it will personalize its pedagogy to you.

There’s a natural language understanding system and a machine learning system that’s trying to figure out where you are in your learning journey and what the appropriate intervention is for you. The natural language system enables this interaction that’s very rich and conversation-based, where you can basically have a human-like conversation with it and, to a large extent, it will try to understand and to retrieve the right things for you. Again the most important thing is that we will set the expectations appropriately and we have appropriate exit criteria for when the system doesn’t actually understand what you’re trying to do.
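
For illustration only (this is not IBM’s actual architecture), here is a toy sketch of the core loop such a tutoring system runs: keep a per-concept mastery estimate, probe the weakest concept, and update the estimate from the learner’s answer.

```python
# A toy sketch of the core loop of an intelligent tutoring system: keep a
# per-concept mastery estimate, always probe the weakest concept, and update
# the estimate from the learner's answer. Illustration only; not IBM Watson's
# actual tutoring architecture.
import random

question_bank = {
    "fractions": [("What is 1/2 + 1/4?", "3/4")],
    "percentages": [("What is 20% of 50?", "10")],
    "ratios": [("Simplify the ratio 4:8.", "1:2")],
}

mastery = {concept: 0.2 for concept in question_bank}   # rough prior estimates

def next_concept():
    # Personalization step: work on whatever the learner is weakest at.
    return min(mastery, key=mastery.get)

def ask_and_update(concept, answer_fn):
    prompt, expected = random.choice(question_bank[concept])
    correct = answer_fn(prompt).strip() == expected
    # Simple exponential-moving-average update of the mastery estimate.
    mastery[concept] = 0.7 * mastery[concept] + 0.3 * (1.0 if correct else 0.0)
    return correct

# Example run with a scripted "learner" standing in for real dialogue input.
scripted_answers = iter(["3/4", "10", "1:3"])
for _ in range(3):
    concept = next_concept()
    ask_and_update(concept, lambda prompt: next(scripted_answers))

print(mastery)   # weak concepts stay low and get asked again next session
```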

The case for a next generation learning platform [Grush & Christian]

 

The case for a next generation learning platform — from campustechnology.com by Mary Grush & Daniel Christian

Excerpt (emphasis DSC):

Grush: Then what are some of the implications you could draw from metrics like that one?

Christian: As we consider all the investment in those emerging technologies, the question many are beginning to ask is, “How will these technologies impact jobs and the makeup of our workforce in the future?”

While there are many thoughts and questions regarding the cumulative impact these technologies will have on our future workforce (e.g., “How many jobs will be displaced?”), the consensus seems to be that there will be massive change.

Whether our jobs are completely displaced or if we will be working alongside robots, chatbots, workbots, or some other forms of AI-backed personal assistants, all of us will need to become lifelong learners — to be constantly reinventing ourselves. This assertion is also made in the aforementioned study from McKinsey: “AI promises benefits, but also poses urgent challenges that cut across firms, developers, government, and workers. The workforce needs to be re-skilled to exploit AI rather than compete with it…”

 

 

A side note from DSC:
I began working on this vision prior to 2010…but I didn’t officially document it until 2012.

 

The Living [Class] Room -- by Daniel Christian -- July 2012 -- a second device used in conjunction with a Smart/Connected TV

 

Learning from the Living [Class] Room:

A global, powerful, next generation learning platform

 

What does the vision entail?

  • A new, global, collaborative learning platform that offers more choice, more control to learners of all ages – 24×7 – and could become the organization that futurist Thomas Frey discusses here with Business Insider:

“I’ve been predicting that by 2030 the largest company on the internet is going to be an education-based company that we haven’t heard of yet,” Frey, the senior futurist at the DaVinci Institute think tank, tells Business Insider.

  • A learner-centered platform that is enabled by – and reliant upon – human beings but is backed up by a powerful suite of technologies that work together in order to help people reinvent themselves quickly, conveniently, and extremely cost-effectively
  • A customizable learning environment that will offer up-to-date streams of regularly curated content (i.e., microlearning) as well as engaging learning experiences
  • Along these lines, a lifelong learner can opt to receive an RSS feed on a particular topic until they master that concept; periodic quizzes (i.e., spaced repetition) determine that mastery (see the sketch after this list). Once mastered, the system will ask the learner whether they still want to receive that particular stream of content or not.
  • A Netflix-like interface to peruse and select plugins to extend the functionality of the core product
  • An AI-backed system of analyzing employment trends and opportunities will highlight those courses and streams of content that will help someone obtain the most in-demand skills
  • A system that tracks learning and, via Blockchain-based technologies, feeds all completed learning modules/courses into learners’ web-based learner profiles
  • A learning platform that provides customized, personalized recommendation lists – based upon the learner’s goals
  • A platform that delivers customized, personalized learning within a self-directed course (meant for those content creators who want to deliver more sophisticated courses/modules while moving people through the relevant Zones of Proximal Development)
  • Notifications and/or inspirational quotes will be available upon request to help provide motivation, encouragement, and accountability – helping learners establish habits of continual, lifelong-based learning
  • (Potentially) An online-based marketplace, matching learners with teachers, professors, and other such Subject Matter Experts (SMEs)
  • (Potentially) Direct access to popular job search sites
  • (Potentially) Direct access to resources that describe what other companies do/provide and descriptions of any particular company’s culture (as described by current and former employees and freelancers)
  • (Potentially) Integration with one-on-one tutoring services

Further details here >>

Addendum from DSC (regarding the resource mentioned below):
Note the voice recognition/control mechanisms on Westinghouse’s new product — also note the integration of Amazon’s Alexa into a “TV.”



 

Westinghouse’s Alexa-equipped Fire TV Edition smart TVs are now available — from theverge.com by Chaim Gartenberg

 

The key selling point, of course, is the built-in Amazon Fire TV, which is controlled with the bundled Voice Remote and features Amazon’s Alexa assistant.

Finally…also see:

  • NASA unveils a skill for Amazon’s Alexa that lets you ask questions about Mars — from geekwire.com by Kevin Lisota
  • Holographic storytelling — from jwtintelligence.com
    The stories of Holocaust survivors are brought to life with the help of interactive 3D technologies.
    New Dimensions in Testimony is a new way of preserving history for future generations. The project brings to life the stories of Holocaust survivors with 3D video, revealing raw first-hand accounts that are more interactive than learning through a history book.  Holocaust survivor Pinchas Gutter, the first subject of the project, was filmed answering over 1000 questions, generating approximately 25 hours of footage. By incorporating natural language processing from the USC Institute for Creative Technologies (ICT), people are able to ask Gutter’s projected image questions that trigger relevant responses.
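
The question-matching step described in that last item can be illustrated with a simplified retrieval sketch: compare a visitor’s question against the recorded prompts and play the closest match. The prompts below are placeholders, and the real New Dimensions in Testimony system uses USC ICT’s own natural-language pipeline, not this toy TF-IDF approach.

```python
# A simplified sketch of the retrieval step described above: match a visitor's
# question to the most similar pre-recorded prompt. The prompts here are
# placeholders; the actual system uses USC ICT's own NLP pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

recorded_prompts = [
    "Where were you born?",
    "What happened to your family during the war?",
    "What message do you have for young people today?",
]

vectorizer = TfidfVectorizer().fit(recorded_prompts)
prompt_vectors = vectorizer.transform(recorded_prompts)

def best_response_index(visitor_question: str) -> int:
    q_vec = vectorizer.transform([visitor_question])
    sims = cosine_similarity(q_vec, prompt_vectors)[0]
    return int(sims.argmax())        # index of the recorded clip to play

print(best_response_index("What would you say to students today?"))
```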

Winner takes all — by Michael Moe, Luben Pampoulov, Li Jiang, Nick Franco, & Suzee Han

 

We did a lot of things that seemed crazy at the time. Many of those crazy things now have over a billion users, like Google Maps, YouTube, Chrome, and Android.

— Larry Page, CEO, Alphabet

 

 

Excerpt:

An alphabet is a collection of letters that represent language. Alphabet, accordingly, is a collection of companies that represent the many bets Larry Page is making to ensure his platform is built to not only survive, but to thrive in a future defined by accelerating digital disruption. It’s an “Alpha” bet on a diversified platform of assets.

If you look closely, the world’s top technology companies are making similar bets.

Technology in general, and the Internet in particular, is all about disproportionate gains to the leader in a category. Accordingly, as technology leaders like Facebook, Alphabet, and Amazon survey the competitive landscape, they have increasingly aimed to develop and acquire emerging technology capabilities across a broad range of complementary categories.

Introducing Deep Learning and Neural Networks — Deep Learning for Rookies — from medium.com by Nahua Kang

Excerpts:

Here’s a short list of general tasks that deep learning can perform in real situations:

  1. Identify faces (or more generally image categorization)
  2. Read handwritten digits and texts
  3. Recognize speech (no more transcribing interviews yourself)
  4. Translate languages
  5. Play computer games
  6. Control self-driving cars (and other types of robots)

And there’s more. Just pause for a second and imagine all the things that deep learning could achieve. It’s amazing and perhaps a bit scary!

Currently there are already many great courses, tutorials, and books on the internet covering this topic, such as the following (not exhaustive, and in no specific order):

  1. Michael Nielsen’s Neural Networks and Deep Learning
  2. Geoffrey Hinton’s Neural Networks for Machine Learning
  3. Goodfellow, Bengio, & Courville’s Deep Learning
  4. Andrew Trask’s Grokking Deep Learning
  5. Francois Chollet’s Deep Learning with Python
  6. Udacity’s Deep Learning Nanodegree (not free but high quality)
  7. Udemy’s Deep Learning A-Z ($10–$15)
  8. Stanford’s CS231n and CS224n
  9. Siraj Raval’s YouTube channel

The list goes on and on. David Venturi has a post for freeCodeCamp that lists many more resources. Check it out here.
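
Before moving on, it may help to see how small the core idea is. The resources above all teach the same basic mechanic: layers of weights adjusted by gradient descent. Here is a toy two-layer network in plain NumPy that learns XOR; it is a teaching sketch of the forward and backward passes, not production deep learning code.

```python
# A toy two-layer neural network, in plain NumPy, that learns XOR.
# A teaching sketch of forward and backward propagation only; real deep
# learning systems use frameworks, many layers, and specialized hardware.
import numpy as np

rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)          # XOR targets

W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))        # hidden layer
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))        # output layer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(5000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass (gradients of mean squared error).
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient-descent updates.
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(out.round(2).ravel())   # should approach [0, 1, 1, 0]
```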

When AI can transcribe everything — from theatlantic.com by Greg Noone
Tech companies are rapidly developing tools to save people from the drudgery of typing out conversations—and the impact could be profound.

Excerpt:

Despite the recent emergence of browser-based transcription aids, transcription’s an area of drudgery in the modern Western economy where machines can’t quite squeeze human beings out of the equation. That is until last year, when Microsoft built one that could.

Automatic speech recognition, or ASR, is an area that has gripped the firm’s chief speech scientist, Xuedong Huang, since he entered a doctoral program at Scotland’s Edinburgh University. “I’d just left China,” he says, remembering the difficulty he had in using his undergraduate knowledge of American English to parse the Scottish brogue of his lecturers. “I wished every lecturer and every professor, when they talked in the classroom, could have subtitles.”

“That’s the thing with transcription technology in general,” says Prenger. “Once the accuracy gets above a certain bar, everyone will probably start doing their transcriptions that way, at least for the first several rounds.” He predicts that, ultimately, automated transcription tools will increase both the supply of and the demand for transcripts. “There could be a virtuous circle where more people expect more of their audio that they produce to be transcribed, because it’s now cheaper and easier to get things transcribed quickly. And so, it becomes the standard to transcribe everything.”
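
For readers who want to try automated transcription themselves, here is a minimal sketch using the open-source SpeechRecognition package for Python. The audio file name is a placeholder, and this is not the Microsoft system discussed in the article.

```python
# A minimal sketch of automated transcription using the open-source
# SpeechRecognition package (pip install SpeechRecognition). The audio file
# name is a placeholder; this is not Microsoft's ASR system described above.
import speech_recognition as sr

recognizer = sr.Recognizer()

with sr.AudioFile("interview_clip.wav") as source:   # placeholder file name
    audio = recognizer.record(source)                # read the whole clip

try:
    transcript = recognizer.recognize_google(audio)  # free web API, needs internet
    print(transcript)
except sr.UnknownValueError:
    print("Audio was not intelligible enough to transcribe.")
except sr.RequestError as err:
    print(f"Could not reach the recognition service: {err}")
```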
