Why Jensen Huang and Marc Benioff see ‘gigantic’ opportunity for agentic AI — from venturebeat.com by Taryn Plumb

Going forward, the opportunity for AI agents will be “gigantic,” according to Nvidia founder and CEO Jensen Huang.

Already, progress is “spectacular and surprising,” with AI development moving faster and faster and the industry getting into the “flywheel zone” that technology needs to advance, Huang said in a fireside chat at Salesforce’s flagship event Dreamforce this week.

“This is an extraordinary time,” Huang said while on stage with Marc Benioff, Salesforce chair, CEO and co-founder. “At no time in history has technology moved faster than Moore’s Law. We’re moving way faster than Moore’s Law, arguably Moore’s Law squared.”

“We’ll have agents working with agents, agents working with us,” said Huang.

 

“One left” by u/jim_andr in r/OpenAI (reddit.com)

 

From DSC:
I’m not trying to gossip here. I post this because Sam Altman is the head of arguably one of the most powerful companies in the world today — at least in terms of introducing change to a variety of societies throughout the globe (both positive and negative). So when we’ve now seen almost the entire leadership team head out the door, this certainly gives me major pause. I don’t like it.
Items like the ones below begin to capture some of why I’m troubled by, and suspicious of, these moves.

 

This article…

Artificial Intelligence and Schools: When Tech Makers and Educators Collaborate, AI Doesn’t Have to be Scary — from the74million.org by Edward Montalvo
AI is already showing us how to make education more individualized and equitable.

The XQ Institute shares this mindset as part of our mission to reimagine the high school learning experience so it’s more relevant and engaging for today’s learners, while better preparing them for the future. We see AI as a tool with transformative potential for educators and makers to leverage — but only if it’s developed and implemented with ethics, transparency and equity at the forefront. That’s why we’re building partnerships between educators and AI developers to ensure that products are shaped by the real needs and challenges of students, teachers and schools. Here’s how we believe all stakeholders can embrace the Department’s recommendations through ongoing collaborations with tech leaders, educators and students alike.

…leads me to the XQ Institute, and I very much like what I’m initially seeing! Here are some excerpts from their website:

 


 

FlexOS’ Stay Ahead Edition #43 — from flexos.work

People started discussing what they could do with NotebookLM after Google launched Audio Overview, which lets you listen to two hosts talking in depth about the documents you upload. Here’s what it can do:

  • Summarization: Automatically generate summaries of uploaded documents, highlighting key topics and suggesting relevant questions.
  • Question Answering: Users can ask NotebookLM questions about their uploaded documents, and answers will be provided based on the information contained within them.
  • Idea Generation: NotebookLM can assist with brainstorming and developing new ideas.
  • Source Grounding: A big plus against AI chatbot hallucination, NotebookLM allows users to ground the responses in specific documents they choose.
  • …plus several other items

The posting also lists several ideas to try with NotebookLM such as:

Idea 2: Study Companion

  • Upload all your course materials and ask NotebookLM to turn them into Question-and-Answer format, a glossary, or a study guide.
  • Get a breakdown of the course materials to understand them better.

Google’s NotebookLM: A Game-Changer for Education and Beyond — from ai-supremacy.com by Michael Spencer and Nick Potkalitsky
AI Tools: Breaking down Google’s latest AI tool and its implications for education.

“Google’s AI note-taking app NotebookLM can now explain complex topics to you out loud”

With more immersive text-to-video and audio products soon available and the rise of apps like Suno AI, how we “experience” generative AI is also changing, from the chatbots of two years ago to a more multimodal educational journey. The AI tools on the research and curation side are also starting to reflect these advancements.


Meet Google NotebookLM: 10 things to know for educators — from ditchthattextbook.com by Matt Miller

1. Upload a variety of sources for NotebookLM to use. 
You can use …

  • websites
  • PDF files
  • links to websites
  • any text you’ve copied
  • Google Docs and Slides
  • even Markdown

You can’t link it to YouTube videos, but you can copy/paste the transcript (and maybe type a little context about the YouTube video before pasting the transcript).

2. Ask it to create resources.
3. Create an audio summary.
4. Chat with your sources.
5. Save (almost) everything. 


NotebookLM summarizes my dissertation — from darcynorman.net by D’Arcy Norman, PhD

I finally tried out Google’s newly-announced NotebookLM generative AI application. It provides a set of LLM-powered tools to summarize documents. I fed it my dissertation, and am surprised at how useful the output would be.

The most impressive tool creates a podcast episode, complete with dual hosts in conversation about the document. First – these are AI-generated hosts. Synthetic voices, speaking for synthetic hosts. And holy moly is it effective. Second – although I’d initially thought the conversational summary would be a dumb gimmick, it is surprisingly powerful.


4 Tips for Designing AI-Resistant Assessments — from techlearning.com by Steve Baule and Erin Carter
As AI continues to evolve, instructors must modify their approach by designing meaningful, rigorous assessments.

As instructors revise assessments so they cannot easily be generated by AI tools with little student input, they should consider the following principles:

  • Incorporate personal experiences and local content into assignments
  • Ask students for multi-modal deliverables
  • Assess the developmental benchmarks for assignments and transition assignments further up Bloom’s Taxonomy
  • Consider real-time and oral assignments

Google CEO Sundar Pichai announces $120M fund for global AI education — from techcrunch.com by Anthony Ha

He added that he wants to avoid a global “AI divide” and that Google is creating a $120 million Global AI Opportunity Fund through which it will “make AI education and training available in communities around the world” in partnership with local nonprofits and NGOs.


Educators discuss the state of creativity in an AI world — from gettingsmart.com by Joe & Kristin Merrill, LaKeshia Brooks, Dominique’ Harbour, Erika Sandstrom

Key Points

  • AI allows for a more personalized learning experience, enabling students to explore creative ideas without traditional classroom limitations.
  • The focus of technology integration should be on how the tool is used within lessons, not just the tool itself.

Addendum on 9/27/24:

Google’s NotebookLM enhances AI note-taking with YouTube, audio file sources, sharable audio discussions — from techcrunch.com by Jagmeet Singh

Google on Thursday announced new updates to its AI note-taking and research assistant, NotebookLM, allowing users to get summaries of YouTube videos and audio files and even create sharable AI-generated audio discussions.

NotebookLM adds audio and YouTube support, plus easier sharing of Audio Overviews — from blog.google

 

AI researcher Jim Fan has had a charmed career. He was OpenAI’s first intern before he did his PhD at Stanford with “godmother of AI” Fei-Fei Li. He graduated into a research scientist position at Nvidia and now leads its Embodied AI “GEAR” group. The lab’s current work spans from foundation models for humanoid robots to agents for virtual worlds. Jim describes a three-pronged data strategy for robotics, combining internet-scale data, simulation data and real-world robot data. He believes that in the next few years it will be possible to create a “foundation agent” that can generalize across skills, embodiments and realities—both physical and virtual. He also supports Jensen Huang’s idea that “Everything that moves will eventually be autonomous.”


Runway Partners with Lionsgate — from runwayml.com via The Rundown AI
Runway and Lionsgate are partnering to explore the use of AI in film production.

Lionsgate and Runway have entered into a first-of-its-kind partnership centered around the creation and training of a new AI model, customized on Lionsgate’s proprietary catalog. Fundamentally designed to help Lionsgate Studios, its filmmakers, directors and other creative talent augment their work, the model generates cinematic video that can be further iterated using Runway’s suite of controllable tools.

Per The Rundown: Lionsgate, the film company behind The Hunger Games, John Wick, and Saw, teamed up with AI video generation company Runway to create a custom AI model trained on Lionsgate’s film catalogue.

The details:

  • The partnership will develop an AI model specifically trained on Lionsgate’s proprietary content library, designed to generate cinematic video that filmmakers can further manipulate using Runway’s tools.
  • Lionsgate sees AI as a tool to augment and enhance its current operations, streamlining both pre-production and post-production processes.
  • Runway is considering ways to offer similar custom-trained models as templates for individual creators, expanding access to AI-powered filmmaking tools beyond major studios.

Why it matters: As many writers, actors, and filmmakers strike against ChatGPT, Lionsgate is diving head-first into the world of generative AI through its partnership with Runway. This is one of the first major collabs between an AI startup and a major Hollywood company — and its success or failure could set precedent for years to come.


A bottle of water per email: the hidden environmental costs of using AI chatbots — from washingtonpost.com by Pranshu Verma and Shelly Tan (behind paywall)
AI bots generate a lot of heat, and keeping their computer servers running exacts a toll.

Each prompt on ChatGPT flows through a server that runs thousands of calculations to determine the best words to use in a response.

In completing those calculations, these servers, typically housed in data centers, generate heat. Often, water systems are used to cool the equipment and keep it functioning. Water transports the heat generated in the data centers into cooling towers to help it escape the building, similar to how the human body uses sweat to keep cool, according to Shaolei Ren, an associate professor at UC Riverside.

Where electricity is cheaper, or water comparatively scarce, electricity is often used to cool these warehouses with large units resembling air-conditioners, he said. That means the amount of water and electricity an individual query requires can depend on a data center’s location and vary widely.


AI, Humans and Work: 10 Thoughts. — from rishad.substack.com by Rishad Tobaccowala
The Future Does Not Fit in the Containers of the Past. Edition 215.

10 thoughts about AI, Humans and Work in 10 minutes:

  1. AI is still Under-hyped.
  2. AI itself will be like electricity and is unlikely to be a differentiator for most firms.
  3. AI is not alive but can be thought of as a new species.
  4. Knowledge will be free and every knowledge worker’s job will change in 2025.
  5. The key about AI is not to ask what AI will do to us but what AI can do for us.
  6. Plus 5 other thoughts

 

 

10 Ways I Use LLMs like ChatGPT as a Professor — from automatedteach.com by Graham Clay
ChatGPT-4o, Gemini 1.5 Pro, Claude 3.5 Sonnet, custom GPTs – you name it, I use it. Here’s how…

Excerpt:

  1. To plan lessons (especially activities)
  2. To create course content (especially quizzes)
  3. To tutor my students
  4. To grade faster and give better feedback
  5. To draft grant applications
  6. Plus 5 other items

From Caution to Calcification to Creativity: Reanimating Education with AI’s Frankenstein Potential — from nickpotkalitsky.substack.com by Nick Potkalitsky
A Critical Analysis of AI-Assisted Lesson Planning: Evaluating Efficacy and Pedagogical Implications

Excerpt (emphasis DSC):

As we navigate the rapidly evolving landscape of artificial intelligence in education, a troubling trend has emerged. What began as cautious skepticism has calcified into rigid opposition. The discourse surrounding AI in classrooms has shifted from empirical critique to categorical rejection, creating a chasm between the potential of AI and its practical implementation in education.

This hardening of attitudes comes at a significant cost. While educators and policymakers debate, students find themselves caught in the crossfire. They lack safe, guided access to AI tools that are increasingly ubiquitous in the world beyond school walls. In the absence of formal instruction, many are teaching themselves to use these tools, often in less than productive ways. Others live in a state of constant anxiety, fearing accusations of AI reliance in their work. These are just a few symptoms of an overarching educational culture that has become resistant to change, even as the world around it transforms at an unprecedented pace.

Yet, as this calcification sets in, I find myself in a curious position: the more I thoughtfully integrate AI into my teaching practice, the more I witness its potential to enhance and transform education.


NotebookLM and Google’s Multimodal Vision for AI-Powered Learning Tools — from marcwatkins.substack.com by Marc Watkins

A Variety of Use Cases

  • Create an Interactive Syllabus
  • Presentation Deep Dive: Upload Your Slides
  • Note Taking: Turn Your Chalkboard into a Digital Canvas
  • Explore a Reading or Series of Readings
  • Help Navigating Feedback
  • Portfolio Building Blocks

Must-Have Competencies and Skills in Our New AI World: A Synthesis for Educational Reform — from er.educause.edu by Fawzi BenMessaoud
The transformative impact of artificial intelligence on educational systems calls for a comprehensive reform to prepare future generations for an AI-integrated world.

The urgency to integrate AI competencies into education is about preparing students not just to adapt to inevitable changes but to lead the charge in shaping an AI-augmented world. It’s about equipping them to ask the right questions, innovate responsibly, and navigate the ethical quandaries that come with such power.

AI in education should augment and complement their aptitude and expertise, to personalize and optimize the learning experience, and to support lifelong learning and development. AI in education should be a national priority and a collaborative effort among all stakeholders, to ensure that AI is designed and deployed in an ethical, equitable, and inclusive way that respects the diversity and dignity of all learners and educators and that promotes the common good and social justice. AI in education should be about the production of AI, not just the consumption of AI, meaning that learners and educators should have the opportunity to learn about AI, to participate in its creation and evaluation, and to shape its impact and direction.

 

Top Software Engineering Newsletters in 2024 — from ai-supremacy.com by Michael Spencer
Including a very select few ML, AI and product Newsletters into the mix for Software Engineers.

This is an article specifically for the software engineers and developers among you.

Over the past year (2023-2024), professionals have been finding more value in newsletters than ever before (especially on Substack).

As working from home took off, the nature of mentorship and skill acquisition also evolved and shifted. Newsletters with pragmatic advice on our careers, it turns out, are super valuable. This article is a resource list. Are you a software developer, do you work with one, or do you know someone who is or wants to be?

 

Legal budgets will get an AI-inspired makeover in 2025: survey — from legaldive.com by Justin Bachman
Nearly every general counsel is budgeting to add generative AI tools to their departments – and they’re all expecting to realize efficiencies by doing so.

Dive Brief:

  • Nearly all general counsel say their budgets are up slightly after wrestling with widespread cuts last year. And most of them, 61%, say they expect slightly larger budgets next year as well, an average of 5% more, according to the 2025 In-House Legal Budgeting Report from Axiom and Wakefield Research. Technology was ranked as the top in-house investment priority for both 2024 and 2025 for larger companies.
  • Legal managers predict their companies will boost investment in technology and real estate/facilities in 2025, while reducing outlays for human resources and merger and acquisition activity, according to the survey. This mix of changing priorities might disrupt legal budgets.
  • Among the planned legal tech spending, the top three areas for investment are virtual legal assistants/AI-powered chatbots (35%); e-billing and spend-management software (31%); and contract management platforms (30%).
 



“Who to follow in AI” in 2024? — from ai-supremacy.com by Michael Spencer
Part III – #35-55 – I combed the internet and found the best sources of AI insights, education and articles. LinkedIn | Newsletters | X | YouTube | Substack | Threads | Podcasts

This list features both some of the best Newsletters on AI and people who make LinkedIn posts about AI papers, advances and breakthroughs. In today’s article we’ll be meeting the first 19-34, in a list of 180+.

Newsletter Writers
YouTubers
Engineers
Researchers who write
Technologists who are Creators
AI Educators
AI Evangelists of various kinds
Futurism writers and authors

I have been sharing the list in reverse chronological order on LinkedIn here.


Inside Google’s 7-Year Mission to Give AI a Robot Body — from wired.com by Hans Peter Brondmo
As the head of Alphabet’s AI-powered robotics moonshot, I came to believe many things. For one, robots can’t come soon enough. For another, they shouldn’t look like us.


Learning to Reason with LLMs — from openai.com
We are introducing OpenAI o1, a new large language model trained with reinforcement learning to perform complex reasoning. o1 thinks before it answers—it can produce a long internal chain of thought before responding to the user.


Items re: Microsoft Copilot, including a video on Copilot Pages.

Sal Khan on the critical human skills for an AI age — from time.com by Kevin J. Delaney

As a preview of the upcoming Summit interview, here are Khan’s views on two critical questions, edited for space and clarity:

  1. What are the enduring human work skills in a world with ever-advancing AI? Some people say students should study liberal arts. Others say deep domain expertise is the key to remaining professionally relevant. Others say you need to have the skills of a manager to be able to delegate to AI. What do you think are the skills or competencies that ensure continued relevance professionally, employability, etc.?
  2. A lot of organizations are thinking about skills-based approaches to their talent. It involves questions like, ‘Does someone know how to do this thing or not?’ And what are the ways in which they can learn it and have some accredited way to know they actually have done it? That is one of the ways in which people use Khan Academy. Do you have a view of skills-based approaches within workplaces, and any thoughts on how AI tutors and training fit within that context?

 



Introducing OpenAI o1 – from openai.com

We’ve developed a new series of AI models designed to spend more time thinking before they respond. Here is the latest news on o1 research, product and other updates.




Something New: On OpenAI’s “Strawberry” and Reasoning — from oneusefulthing.org by Ethan Mollick
Solving hard problems in new ways

The new AI model, called o1-preview (why are the AI companies so bad at names?), lets the AI “think through” a problem before solving it. This lets it address very hard problems that require planning and iteration, like novel math or science questions. In fact, it can now beat human PhD experts in solving extremely hard physics problems.

To be clear, o1-preview doesn’t do everything better. It is not a better writer than GPT-4o, for example. But for tasks that require planning, the changes are quite large.


What is the point of Super Realistic AI? — from Heather Cooper, who runs Visually AI on Substack

The arrival of super realistic AI image generation, powered by models like Midjourney, FLUX.1, and Ideogram, is transforming the way we create and use visual content.

Recently, many creators (myself included) have been exploring super realistic AI more and more.

But where can this actually be used?

Super realistic AI image generation will have far-reaching implications across various industries and creative fields. Its importance stems from its ability to bridge the gap between imagination and visual representation, offering multiple opportunities for innovation and efficiency.

Heather goes on to mention applications in:

  • Creative Industries
  • Entertainment and Media
  • Education and Training

NotebookLM now lets you listen to a conversation about your sources — from blog.google by Biao Wang
Our new Audio Overview feature can turn documents, slides, charts and more into engaging discussions with one click.

Today, we’re introducing Audio Overview, a new way to turn your documents into engaging audio discussions. With one click, two AI hosts start up a lively “deep dive” discussion based on your sources. They summarize your material, make connections between topics, and banter back and forth. You can even download the conversation and take it on the go.


Bringing generative AI to video with Adobe Firefly Video Model — from blog.adobe.com by Ashley Still

Over the past several months, we’ve worked closely with the video editing community to advance the Firefly Video Model. Guided by their feedback and built with creators’ rights in mind, we’re developing new workflows leveraging the model to help editors ideate and explore their creative vision, fill gaps in their timeline and add new elements to existing footage.

Just like our other Firefly generative AI models, editors can create with confidence knowing the Adobe Firefly Video Model is designed to be commercially safe and is only trained on content we have permission to use — never on Adobe users’ content.

We’re excited to share some of the incredible progress with you today — all of which is designed to be commercially safe and available in beta later this year. To be the first to hear the latest updates and get access, sign up for the waitlist here.

 

The Most Popular AI Tools for Instructional Design (September, 2024) — from drphilippahardman.substack.com by Dr. Philippa Hardman
The tools we use most, and how we use them

This week, as I kick off the 20th cohort of my AI-Learning Design bootcamp, I decided to do some analysis of the work habits of the hundreds of amazing AI-embracing instructional designers who I’ve worked with over the last year or so.

My goal was to answer the question: which AI tools do we use most in the instructional design process, and how do we use them?

Here’s where we are in September, 2024:


Developing Your Approach to Generative AI — from scholarlyteacher.com by Caitlin K. Kirby,  Min Zhuang, Imari Cheyne Tetu, & Stephen Thomas (Michigan State University)

As generative AI becomes integrated into workplaces, scholarly work, and students’ workflows, we have the opportunity to take a broad view of the role of generative AI in higher education classrooms. Our guiding questions are meant to serve as a starting point to consider, from each educator’s initial reaction and preferences around generative AI, how their discipline, course design, and assessments may be impacted, and to have a broad view of the ethics of generative AI use.



The Impact of AI in Advancing Accessibility for Learners with Disabilities — from er.educause.edu by Rob Gibson

AI technology tools hold remarkable promise for providing more accessible, equitable, and inclusive learning experiences for students with disabilities.


 

Risepoint Releases Voice of the Online Learner Report — from academicpartnerships.com by Risepoint; via Jeff Selingo on LinkedIn

The Voice of the Online Learner report highlights the journey of online learners, and the vital role education plays in their personal and professional growth and development. This year’s report compiled responses from over 3,400 prospective, current, and recently graduated online learners.

Key findings from this year’s Voice of the Online Learner report include:

  • Decision Factors for Online Students: When evaluating online programs, the top decision factor for students is cost, with 86% saying it’s extremely or very important, followed by accreditation (84%), program concentrations (75%), and the time it takes to earn the degree (68%). 38% selected the lowest-cost program they evaluated (up from 29% in 2023).
  • Perception of Online Programs: Students see online programs as equally good as, or better than, on-campus degree programs at meeting their needs. 83% of respondents prefer the flexibility of online programs over hybrid or on-campus options, while 90% feel online programs are comparable to or better than an on-campus degree. 83% (up from 71% last year) want no on-campus requirement.
  • Degree ROI: 92% of students who graduated from online degree programs reported tangible benefits to their career, including 44% who received a salary increase.
  • Value of the Degree: Career outcomes continue to be very important for students pursuing their degree. 86% felt their degrees were important in achieving their career goals, and 61% of online undergraduates are likely to enroll in additional online degree programs to stay competitive.
  • Importance of Local Programs: Attending a university or college in the state where the student lives and works is also an important decision factor, with 70% enrolled at a higher education institution in the state where they live and/or work. These students say that local proximity creates greater trust, and that they also want to ensure the programs meet local licensing or accreditation requirements, when relevant.
  • Demographics: The average age for online students enrolled in undergraduate programs is 36 years old, while the average age for students enrolled in graduate programs is 38 years old. Of the students enrolled in undergraduate programs, 40% are first-generation college students.
  • Upskilling is lifelong: 86% of graduated and currently enrolled students are likely to do another online program in the future to upskill.
  • Generative AI is a concern: Students want guidance on generative AI, but 75% reported they have received none. 40% of students think it will affect their career positively and 40% believe it will impact them negatively. Nearly half (48%) have used it to help them study.
 

A third of all generative AI projects will be abandoned, says Gartner — from zdnet.com by Tiernan Ray
The high upfront cost of deployment is one of the challenges that can doom generative AI projects

Companies are “struggling” to find value in the generative artificial intelligence (Gen AI) projects they have undertaken, and one-third of initiatives will end up getting abandoned, according to a recent report from the analyst firm Gartner.

The report states at least 30% of Gen AI projects will be abandoned after the proof-of-concept stage by the end of 2025.

From DSC:
But I wouldn’t write off the other two-thirds of projects that will make it. I wouldn’t write off the future of AI in our world. AI-based technologies are already massively impacting graphic design, film, media, and other creative outlets. See the tweet below for some examples of what I’m talking about.



 

The Six AI Use Case Families of Instructional Design — from drphilippahardman.substack.com by Dr. Philippa Hardman
Pushing AI beyond content creation

So what are the six families? Here’s the TLDR:

  1. Creative Ideation, aka using AI to spark novel ideas and innovative design concepts.
  2. Research & Analysis, aka using AI to rapidly gather and synthesise information from vast sources.
  3. Data-Driven Insights, aka using AI to extract meaningful patterns and predictions from complex datasets.
  4. …and more

Town Hall: Back to School with AI — from gettingsmart.com

Key Points

  • AI can help educators focus more on human interaction and critical thinking by automating tasks that consume time but don’t require human empathy or creativity.
  • Encouraging students to use AI as a tool for learning and creativity can significantly boost their engagement and self-confidence, as seen in examples from student experiences shared in the discussion.

The speakers discuss various aspects of AI, including its potential to augment human intelligence and the need to focus on uniquely human competencies in the face of technological advancements. They also emphasize the significance of student agency, with examples of student-led initiatives and feedback sessions that reveal how young learners are already engaging with AI in innovative ways. The episode underscores the necessity for educators and administrators to stay informed and actively participate in the ongoing dialogue about AI to ensure its effective and equitable implementation in schools.


The video below is from The Artifice of Twinning by Marc Watkins


How AI Knocks Down Classroom Barriers — from gettingsmart.com by Alyssa Faubion

Key Points

  • AI can be a powerful tool to break down language, interest, and accessibility barriers in the classroom, making learning more inclusive and engaging.
  • Incorporating AI tools in educational settings can help build essential skills that AI can’t replace, such as creativity and problem-solving, preparing students for future job markets.

 

From DSC:
Anyone who is involved in putting on conferences should at least be aware that this kind of thing is now possible!!! Check out the following posting from Adobe (with help from Tata Consultancy Services (TCS)).


From impossible to POSSIBLE: Tata Consultancy Services uses Adobe Firefly generative AI and Acrobat AI Assistant to turn hours of work into minutes — from blog.adobe.com

This year, the organizers — innovative industry event company Beyond Ordinary Events — turned to Tata Consultancy Services (TCS) to make the impossible “possible.” Leveraging Adobe generative AI technology across products like Adobe Premiere Pro and Acrobat, they distilled hours of video content in minutes, delivering timely dispatches to thousands of attendees throughout the conference.

For POSSIBLE ’24, Muche had an idea for a daily dispatch summarizing each day’s sessions so attendees wouldn’t miss a single insight. But timing would be critical. The dispatch needed to reach attendees shortly after sessions ended to fuel discussions over dinner and carry the excitement over to the next day.

The workflow started in Adobe Premiere Pro, with the writer opening a recording of each session and using the Speech to Text feature to automatically generate a transcript. They saved the transcript as a PDF file and opened it in Adobe Acrobat Pro. Then, using Adobe Acrobat AI Assistant, the writer asked for a session summary.

It was that fast and easy. In less than four minutes, one person turned a 30-minute session into an accurate, useful summary ready for review and publication.

By taking advantage of templates, the designer then added each AI-enabled summary to the newsletter in minutes. With just two people and generative AI technology, TCS accomplished the impossible — for the first time delivering an informative, polished newsletter to all 3,500 conference attendees just hours after the last session of the day.

 
© 2024 | Daniel Christian