AI’s Trillion-Dollar Opportunity — from bain.com by David Crawford, Jue Wang, and Roy Singh
The market for AI products and services could reach between $780 billion and $990 billion by 2027.

At a Glance

  • The big cloud providers are the largest concentration of R&D, talent, and innovation today, pushing the boundaries of large models and advanced infrastructure.
  • Innovation with smaller models (open-source and proprietary), edge infrastructure, and commercial software is reaching enterprises, sovereigns, and research institutions.
  • Commercial software vendors are rapidly expanding their feature sets to provide the best use cases and leverage their data assets.

Accelerated market growth. Nvidia’s CEO, Jensen Huang, summed up the potential in the company’s Q3 2024 earnings call: “Generative AI is the largest TAM [total addressable market] expansion of software and hardware that we’ve seen in several decades.”


And on a somewhat related note (i.e., emerging technologies), also see the following two postings:

Surgical Robots: Current Uses and Future Expectations — from medicalfuturist.com by Pranavsingh Dhunnoo
As the term implies, a surgical robot is an assistive tool for performing surgical procedures. Such manoeuvres, also called robotic surgeries or robot-assisted surgery, usually involve a human surgeon controlling mechanical arms from a control centre.

Key Takeaways

  • Robots’ potential has been a fascination for humans and has even led to a booming field of robot-assisted surgery.
  • Surgical robots assist surgeons in performing accurate, minimally invasive procedures that are beneficial for patients’ recovery.
  • The assistance of robots extends beyond incisions and includes laparoscopies, radiosurgeries and, in the future, a combination of artificial intelligence technologies to assist surgeons in their craft.

Proto hologram tech allows cancer patients to receive specialist care without traveling large distances — from inavateonthenet.net

“Working with the team from Proto to bring to life, what several years ago would have seemed impossible, is now going to allow West Cancer Center & Research Institute to pioneer options for patients to get highly specialized care without having to travel to large metro areas,” said West Cancer’s CEO, Mitch Graves.




Clone your voice in minutes: The AI trick 95% don’t know about — from aidisruptor.ai by Alex McFarland
Warning: May cause unexpected bouts of talking to yourself

Now that you’ve got your voice clone, what can you do with it?

  1. Content Creation:
    • Podcast Production: Record episodes in half the time. Your listeners won’t know the difference, but your schedule will thank you.
    • Audiobook Narration: Always wanted to narrate your own book? Now you can, without spending weeks in a recording studio.
    • YouTube Videos: Create voiceovers for your videos in multiple languages. World domination, here you come!
  2. Business Brilliance:
    • Customer Service: Personalized automated responses that actually sound personal.
    • Training Materials: Create engaging e-learning content in your own voice, minus the hours of recording.
    • Presentations: Never worry about losing your voice before a big presentation again. Your clone’s got your back.

185 real-world gen AI use cases from the world’s leading organizations — from blog.google by Brian Hall; via Daniel Nest’s Why Try AI

In a matter of months, organizations have gone from AI helping answer questions, to AI making predictions, to generative AI agents. What makes AI agents unique is that they can take actions to achieve specific goals, whether that’s guiding a shopper to the perfect pair of shoes, helping an employee looking for the right health benefits, or supporting nursing staff with smoother patient hand-offs during shift changes.

In our work with customers, we keep hearing that their teams are increasingly focused on improving productivity, automating processes, and modernizing the customer experience. These aims are now being achieved through the AI agents they’re developing in six key areas: customer service; employee empowerment; code creation; data analysis; cybersecurity; and creative ideation and production.

Here’s a snapshot of how 185 of these industry leaders are putting AI to use today, creating real-world use cases that will transform tomorrow.


AI Data Drop: 3 Key Insights from Real-World Research on AI Usage — from microsoft.com; via Daniel Nest’s Why Try AI
One of the largest studies of Copilot usage—at nearly 60 companies—reveals how AI is changing the way we work.

  1. AI is starting to liberate people from email
  2. Meetings are becoming more about value creation
  3. People are co-creating more with AI—and with one another


Dharmesh has been working on creating agent.ai — a professional network for AI agents.


Speaking of agents, also see:

Onboarding the AI workforce: How digital agents will redefine work itself — from venturebeat.com by Gary Grossman

AI in 2030: A transformative force

  1. AI agents are integral team members
  2. The emergence of digital humans
  3. AI-driven speech and conversational interfaces
  4. AI-enhanced decision-making and leadership
  5. Innovation and research powered by AI
  6. The changing nature of job roles and skills

AI Video Tools You Can Use Today — from heatherbcooper.substack.com by Heather Cooper
The latest AI video models that deliver results

AI video models are improving so quickly, I can barely keep up! I wrote about unreleased Adobe Firefly Video in the last issue, and we are no closer to public access to Sora.

No worries – we do have plenty of generative AI video tools we can use right now.

  • Kling AI launched its updated v1.5 and the quality of image or text to video is impressive.
  • Hailuo MiniMax text to video remains free to use for now, and it produces natural and photorealistic results (with watermarks).
  • Runway added the option to upload portrait aspect ratio images to generate vertical videos in Gen-3 Alpha & Turbo modes.
  • …plus several more

 

The Six AI Use Case Families of Instructional Design — from drphilippahardman.substack.com by Dr. Philippa Hardman
Pushing AI beyond content creation

So what are the six families? Here’s the TLDR:

  1. Creative Ideation, aka using AI to spark novel ideas and innovative design concepts.
  2. Research & Analysis, aka using AI to rapidly gather and synthesise information from vast sources.
  3. Data-Driven Insights, aka using AI to extract meaningful patterns and predictions from complex datasets.
  4. …and more

Town Hall: Back to School with AI — from gettingsmart.com

Key Points

  • AI can help educators focus more on human interaction and critical thinking by automating tasks that consume time but don’t require human empathy or creativity.
  • Encouraging students to use AI as a tool for learning and creativity can significantly boost their engagement and self-confidence, as seen in examples from student experiences shared in the discussion.

The speakers discuss various aspects of AI, including its potential to augment human intelligence and the need to focus on uniquely human competencies in the face of technological advancements. They also emphasize the significance of student agency, with examples of student-led initiatives and feedback sessions that reveal how young learners are already engaging with AI in innovative ways. The episode underscores the necessity for educators and administrators to stay informed and actively participate in the ongoing dialogue about AI to ensure its effective and equitable implementation in schools.


The video below is from The Artifice of Twinning by Marc Watkins


How AI Knocks Down Classroom Barriers — from gettingsmart.com by Alyssa Faubion

Key Points

  • AI can be a powerful tool to break down language, interest, and accessibility barriers in the classroom, making learning more inclusive and engaging.
  • Incorporating AI tools in educational settings can help build essential skills that AI can’t replace, such as creativity and problem-solving, preparing students for future job markets.

 

When A.I.’s Output Is a Threat to A.I. Itself — from nytimes.com by Aatish Bhatia
As A.I.-generated data becomes harder to detect, it’s increasingly likely to be ingested by future A.I., leading to worse results.

All this A.I.-generated information can make it harder for us to know what’s real. And it also poses a problem for A.I. companies. As they trawl the web for new data to train their next models on — an increasingly challenging task — they’re likely to ingest some of their own A.I.-generated content, creating an unintentional feedback loop in which what was once the output from one A.I. becomes the input for another.

In the long run, this cycle may pose a threat to A.I. itself. Research has shown that when generative A.I. is trained on a lot of its own output, it can get a lot worse.
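
As a toy illustration of why that degradation happens (my sketch, not from the Times piece or the underlying research): if each “generation” of a model is fit only to samples drawn from the previous generation, estimation error compounds, and the learned distribution slowly drifts and narrows.

```python
import random
import statistics

# Toy illustration only: each "generation" is fit to samples drawn from the
# previous generation's fitted distribution, never from the real data again.
random.seed(0)
mu, sigma = 0.0, 1.0             # generation 0: the "real data" distribution
n_samples, generations = 20, 40  # small samples exaggerate the effect

for gen in range(1, generations + 1):
    samples = [random.gauss(mu, sigma) for _ in range(n_samples)]
    mu = statistics.fmean(samples)      # refit on purely synthetic samples
    sigma = statistics.pstdev(samples)
    if gen % 10 == 0:
        print(f"generation {gen:2d}: mean={mu:+.3f}, std={sigma:.3f}")

# On average the fitted spread shrinks a little each generation, so over many
# generations the distribution drifts and collapses toward a point -- a crude
# analogue of models losing the "tails" of real data when trained on their own output.
```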


Per The Rundown AI:

The Rundown: Elon Musk’s xAI just launched “Colossus”, the world’s most powerful AI cluster powered by a whopping 100,000 Nvidia H100 GPUs, which was built in just 122 days and is planned to double in size soon.

Why it matters: xAI’s Grok 2 recently caught up to OpenAI’s GPT-4 in record time, and was trained on only around 15,000 GPUs. With now more than six times that amount in production, the xAI team and future versions of Grok are going to put a significant amount of pressure on OpenAI, Google, and others to deliver.


Google Meet’s automatic AI note-taking is here — from theverge.com by Joanna Nelius
Starting [on 8/28/24], some Google Workspace customers can have Google Meet be their personal note-taker.

Google Meet’s newest AI-powered feature, “take notes for me,” has started rolling out today to Google Workspace customers with the Gemini Enterprise, Gemini Education Premium, or AI Meetings & Messaging add-ons. It’s similar to Meet’s transcription tool, only instead of automatically transcribing what everyone says, it summarizes what everyone talked about. Google first announced this feature at its 2023 Cloud Next conference.


The World’s Call Center Capital Is Gripped by AI Fever — and Fear — from bloomberg.com by Saritha Rai [behind a paywall]
The experiences of staff in the Philippines’ outsourcing industry are a preview of the challenges and choices coming soon to white-collar workers around the globe.


[Claude] Artifacts are now generally available — from anthropic.com

[On 8/27/24], we’re making Artifacts available for all Claude.ai users across our Free, Pro, and Team plans. And now, you can create and view Artifacts on our iOS and Android apps.

Artifacts turn conversations with Claude into a more creative and collaborative experience. With Artifacts, you have a dedicated window to instantly see, iterate, and build on the work you create with Claude. Since launching as a feature preview in June, users have created tens of millions of Artifacts.


MIT’s AI Risk Repository — a comprehensive database of risks from AI systems

What are the risks from Artificial Intelligence?
A comprehensive living database of over 700 AI risks categorized by their cause and risk domain.

What is the AI Risk Repository?
The AI Risk Repository has three parts:

  • The AI Risk Database captures 700+ risks extracted from 43 existing frameworks, with quotes and page numbers.
  • The Causal Taxonomy of AI Risks classifies how, when, and why these risks occur.
  • The Domain Taxonomy of AI Risks classifies these risks into seven domains (e.g., “Misinformation”) and 23 subdomains (e.g., “False or misleading information”).

California lawmakers approve legislation to ban deepfakes, protect workers and regulate AI — from newsday.com by The Associated Press

SACRAMENTO, Calif. — California lawmakers approved a host of proposals this week aiming to regulate the artificial intelligence industry, combat deepfakes and protect workers from exploitation by the rapidly evolving technology.

Per Oncely:

The Details:

  • Combatting Deepfakes: New laws to restrict election-related deepfakes and deepfake pornography, especially of minors, requiring social media to remove such content promptly.
  • Setting Safety Guardrails: California is poised to set comprehensive safety standards for AI, including transparency in AI model training and pre-emptive safety protocols.
  • Protecting Workers: Legislation to prevent the replacement of workers, like voice actors and call center employees, with AI technologies.

New in Gemini: Custom Gems and improved image generation with Imagen 3 — from blog.google
The ability to create custom Gems is coming to Gemini Advanced subscribers, and updated image generation capabilities with our latest Imagen 3 model are coming to everyone.

We have new features rolling out [starting on 8/28/24] that we previewed at Google I/O. Gems, a new feature that lets you customize Gemini to create your own personal AI experts on any topic you want, are now available for Gemini Advanced, Business and Enterprise users. And our new image generation model, Imagen 3, will be rolling out across Gemini, Gemini Advanced, Business and Enterprise in the coming days.


Cut the Chatter, Here Comes Agentic AI — from trendmicro.com

Major AI players caught heat in August over big bills and weak returns on AI investments, but it would be premature to think AI has failed to deliver. The real question is what’s next, and if industry buzz and pop-sci pontification hold any clues, the answer isn’t “more chatbots”, it’s agentic AI.

Agentic AI transforms the user experience from application-oriented information synthesis to goal-oriented problem solving. It’s what people have always thought AI would do—and while it’s not here yet, its horizon is getting closer every day.

In this issue of AI Pulse, we take a deep dive into agentic AI, what’s required to make it a reality, and how to prevent ‘self-thinking’ AI agents from potentially going rogue.

Citing AWS guidance, ZDNET counts six different potential types of AI agents:

    • Simple reflex agents for tasks like resetting passwords
    • Model-based reflex agents for pro vs. con decision making
    • Goal-/rule-based agents that compare options and select the most efficient pathways
    • Utility-based agents that compare for value
    • Learning agents
    • Hierarchical agents that manage and assign subtasks to other agents
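
To make the first category concrete, here is a minimal, purely illustrative sketch (mine, not from AWS guidance or ZDNET) of a simple reflex agent: it maps a perceived condition directly to an action through fixed condition-action rules, with no world model, goals, or learning. The rule table and helper names are hypothetical.

```python
# Illustrative simple reflex agent: fixed condition-action rules, nothing more.
RULES = {
    "forgot_password": "send_password_reset_link",
    "account_locked": "unlock_account_after_verification",
    "unknown": "escalate_to_human_agent",
}

def perceive(ticket_text: str) -> str:
    """Classify the incoming request into a known condition (toy heuristic)."""
    text = ticket_text.lower()
    if "password" in text:
        return "forgot_password"
    if "locked" in text:
        return "account_locked"
    return "unknown"

def act(condition: str) -> str:
    """Look up the action for the perceived condition; no goals, no memory."""
    return RULES[condition]

if __name__ == "__main__":
    for ticket in ["I forgot my password", "My account is locked", "Billing question"]:
        print(ticket, "->", act(perceive(ticket)))
```

The other agent types in the list add exactly what this one lacks: an internal model of the world, explicit goals or utilities to compare options against, the ability to learn from feedback, or a hierarchy that delegates subtasks to other agents.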

Ask Claude: Amazon turns to Anthropic’s AI for Alexa revamp — from reuters.com by Greg Bensinger

Summary:

  • Amazon developing new version of Alexa with generative AI
  • Retailer hopes to generate revenue by charging for its use
  • Concerns about in-house AI prompt Amazon to turn to Anthropic’s Claude, sources say
  • Amazon says it uses many different technologies to power Alexa

Alibaba releases new AI model Qwen2-VL that can analyze videos more than 20 minutes long — from venturebeat.com by Carl Franzen


Hobbyists discover how to insert custom fonts into AI-generated images — from arstechnica.com by Benj Edwards
Like adding custom art styles or characters, in-world typefaces come to Flux.


200 million people use ChatGPT every week – up from 100 million last fall, says OpenAI — from zdnet.com by Sabrina Ortiz
Nearly two years after launching, ChatGPT continues to draw new users. Here’s why.

 

8 Legal Tech Trends Transforming Practice in 2024 — from lawyer-monthly.com

Thanks to rapid advances in technology, the legal landscape is changing fast. In 2024, legal tech integration is becoming the lifeblood of any law firm or legal department that wishes to stay competitive.

Innovations ranging from AI-driven research tools to blockchain-enabled contracts are reshaping legal work today. Understanding and embracing these trends will be vital to surviving and thriving in law as the revolution gains momentum and the sands of legal practice continue to shift.

Below are the eight expected trends in legal tech defining the future legal practice.


Building your legal practice’s AI future: Understanding the actual technologies — from thomsonreuters.com
The implementation of a successful AI strategy for a law firm depends not only on having the right people, but also on understanding the tech and how to make it work for the firm

While we’re not delving deep here into how generative artificial intelligence (GenAI) and large language models (LLMs) work, we will talk generally about different categories of tech and emerging GenAI functionalities that are specific for legal.


Ex-Microsoft engineers raise $25M for legal tech startup that uses AI to help lawyers analyze data — from geekwire.com by Taylor Soper

Supio, a Seattle startup founded in 2021 by longtime friends and former Microsoft engineers, raised a $25 million Series A investment to supercharge its software platform designed to help lawyers quickly sort, search, and organize case-related data.

Supio focuses on cases related to personal injury and mass tort plaintiff law (when many plaintiffs file a claim). It specializes in organizing unstructured data and letting lawyers use a chatbot to pull relevant information.

“Most lawyers are data-rich and time-starved, but Supio automates time-sapping manual processes and empowers them to identify critical information to prove and expedite their cases,” Supio CEO and co-founder Jerry Zhou said in a statement.


ILTACON 2024: Large law firms are moving carefully but always forward with their GenAI strategy — from thomsonreuters.com by Zach Warren

NASHVILLE — As the world approaches the two-year mark since the original introduction of OpenAI’s ChatGPT, law firms already have made in-roads into establishing generative artificial intelligence (GenAI) as a part of their firms. Whether for document and correspondence drafting, summarization of meetings and contracts, legal research, or for back-office capabilities, firms have been playing around with a number of use cases to see where the technology may fit into the future.


Thomson Reuters acquires pre-revenue legal LLM developer Safe Sign Technologies – Here’s why — from legaltechnology.com by Caroline Hill

Thomson Reuters announced (on August 21) it has made the somewhat unusual acquisition of UK pre-revenue startup Safe Sign Technologies (SST), which is developing legal-specific large language models (LLMs) and as of just eight months ago was operating in stealth mode.

There isn’t an awful lot of public information available about the company but speaking to Legal IT Insider about the acquisition, Hron explained that SST is focused in part on deep learning research as it pertains to training large language models and specifically legal large language models. The company as yet has no customers and has been focusing exclusively on developing the technology and the models.


Supio brings generative AI to personal injury cases — from techcrunch.com by Kyle Wiggers

Legal work is incredibly labor- and time-intensive, requiring piecing together cases from vast amounts of evidence. That’s driving some firms to pilot AI to streamline certain steps; according to a 2023 survey by the American Bar Association, 35% of law firms now use AI tools in their practice.

OpenAI-backed Harvey is among the big winners so far in the burgeoning AI legal tech space, alongside startups such as Leya and Klarity. But there’s room for one more, say Jerry Zhou and Kyle Lam, the co-founders of an AI platform for personal injury law called Supio, which emerged from stealth Tuesday with a $25 million investment led by Sapphire Ventures.

Supio uses generative AI to automate bulk data collection and aggregation for legal teams. In addition to summarizing info, the platform can organize and identify files — and snippets within files — that might be useful in outlining, drafting and presenting a case, Zhou said.


 

Free Sites for Back to School — from techlearning.com by Diana Restifo
Top free and freemium sites for learning

An internet search for free learning resources will likely return a long list that includes some useful sites amid a sea of not-really-free and not-very-useful sites.

To help teachers more easily find the best free and freemium sites they can use in their classrooms and curricula, I’ve curated a list that describes the top free/freemium sites for learning.

In some cases, Tech & Learning has reviewed the site in detail, and those links are included so readers can find out more about how to make the best use of the online materials. In all cases, the websites below provide valuable educational tools, lessons, and ideas, and are worth exploring further.


Two bonus postings here! 🙂 

 

Daniel Christian: My slides for the Educational Technology Organization of Michigan’s Spring 2024 Retreat

From DSC:
Last Thursday, I presented at the Educational Technology Organization of Michigan’s Spring 2024 Retreat. I wanted to pass along my slides to you all, in case they are helpful to you.

Topics/agenda:

  • Topics & resources re: Artificial Intelligence (AI)
    • Top multimodal players
    • Resources for learning about AI
    • Applications of AI
    • My predictions re: AI
  • The powerful impact of pursuing a vision
  • A potential, future next-gen learning platform
  • Share some lessons from my past with pertinent questions for you all now
  • The significant impact of an organization’s culture
  • Bonus material: Some people to follow re: learning science and edtech

 

Education Technology Organization of Michigan -- ETOM -- Spring 2024 Retreat on June 6-7

PowerPoint slides of Daniel Christian's presentation at ETOM

Slides of the presentation (.PPTX)
Slides of the presentation (.PDF)

 


Plus several more slides re: this vision.

 

Microsoft teams with Khan Academy to make its AI tutor free for K-12 educators and will develop a Phi-3 math model — from venturebeat.com by Ken Yeung

Microsoft is partnering with Khan Academy in a multifaceted deal to demonstrate how AI can transform the way we learn. The cornerstone of today’s announcement centers on Khan Academy’s Khanmigo AI agent. Microsoft says it will migrate the bot to its Azure OpenAI Service, enabling the nonprofit educational organization to provide all U.S. K-12 educators free access to Khanmigo.

In addition, Microsoft plans to use its Phi-3 model to help Khan Academy improve math tutoring and collaborate to generate more high-quality learning content while making more courses available within Microsoft Copilot and Microsoft Teams for Education.


One-Third of Teachers Have Already Tried AI, Survey Finds — from the74million.org by Kevin Mahnken
A RAND poll released last month finds English and social studies teachers embracing tools like ChatGPT.

One in three American teachers have used artificial intelligence tools in their teaching at least once, with English and social studies teachers leading the way, according to a RAND Corporation survey released last month. While the new technology isn’t yet transforming how kids learn, both teachers and district leaders expect that it will become an increasingly common feature of school life.


Professors Try ‘Restrained AI’ Approach to Help Teach Writing — from edsurge.com by Jeffrey R. Young
Can ChatGPT make human writing more efficient, or is writing an inherently time-consuming process best handled without AI tools?

This article is part of the guide: For Education, ChatGPT Holds Promise — and Creates Problems.

When ChatGPT emerged a year and a half ago, many professors immediately worried that their students would use it as a substitute for doing their own written assignments — that they’d click a button on a chatbot instead of doing the thinking involved in responding to an essay prompt themselves.

But two English professors at Carnegie Mellon University had a different first reaction: They saw in this new technology a way to show students how to improve their writing skills.

“They start really polishing way too early,” Kaufer says. “And so what we’re trying to do is with AI, now you have a tool to rapidly prototype your language when you are prototyping the quality of your thinking.”

He says the concept is based on writing research from the 1980s that shows that experienced writers spend about 80 percent of their early writing time thinking about whole-text plans and organization and not about sentences.


On Building AI Models for Education — from aieducation.substack.com by Claire Zau
Google’s LearnLM, Khan Academy/MSFT’s Phi-3 Models, and OpenAI’s ChatGPT Edu

This piece primarily breaks down how Google’s LearnLM was built, and takes a quick look at Microsoft/Khan Academy’s Phi-3 and OpenAI’s ChatGPT Edu as alternative approaches to building an “education model” (not necessarily a new model in the latter case, but we’ll explain). Thanks to the public release of their 86-page research paper, we have the most comprehensive view into LearnLM. Our understanding of Microsoft/Khan Academy small language models and ChatGPT Edu is limited to the information provided through announcements, leaving us with less “under the hood” visibility into their development.


AI tutors are quietly changing how kids in the US study, and the leading apps are from China — from techcrunch.com by Rita Liao

Answer AI is among a handful of popular apps that are leveraging the advent of ChatGPT and other large language models to help students with everything from writing history papers to solving physics problems. Of the top 20 education apps in the U.S. App Store, five are AI agents that help students with their school assignments, including Answer AI, according to data from Data.ai on May 21.


Is your school behind on AI? If so, there are practical steps you can take for the next 12 months — from stefanbauschard.substack.com by Stefan Bauschard

If your school (district) or university has not yet made significant efforts to think about how you will prepare your students for a World of AI, I suggest the following steps:

July 2024 – Administrator PD & AI Guidance
In July, administrators should receive professional development on AI, if they haven’t already. This should include…

August 2024 – Professional Development for Teachers and Staff…
Fall 2024 – Parents; Co-curricular; Classroom experiments…
December 2024 – Revision to Policy…


New ChatGPT Version Aiming at Higher Ed — from insidehighered.com by Lauren Coffey
ChatGPT Edu, emerging after initial partnerships with several universities, is prompting both cautious optimism and worries.

OpenAI unveiled a new version of ChatGPT focused on universities on Thursday, building on work with a handful of higher education institutions that partnered with the tech giant.

The ChatGPT Edu product, expected to start rolling out this summer, is a platform for institutions intended to give students free access. OpenAI said the artificial intelligence (AI) toolset could be used for an array of education applications, including tutoring, writing grant applications and reviewing résumés.

 

Introducing ChatGPT Edu — from openai.com
An affordable offering for universities to responsibly bring AI to campus.

We’re announcing ChatGPT Edu, a version of ChatGPT built for universities to responsibly deploy AI to students, faculty, researchers, and campus operations. Powered by GPT-4o, ChatGPT Edu can reason across text and vision and use advanced tools such as data analysis. This new offering includes enterprise-level security and controls and is affordable for educational institutions.

We built ChatGPT Edu because we saw the success universities like the University of Oxford, Wharton School of the University of Pennsylvania, University of Texas at Austin, Arizona State University, and Columbia University in the City of New York were having with ChatGPT Enterprise.

ChatGPT can help with various tasks across campus, such as providing personalized tutoring for students and reviewing their resumes, helping researchers write grant applications, and assisting faculty with grading and feedback. 


Claude can now use tools — from anthropic.com

Excerpt (emphasis DSC):

Tool use, which enables Claude to interact with external tools and APIs, is now generally available across the entire Claude 3 model family on the Anthropic Messages API, Amazon Bedrock, and Google Cloud’s Vertex AI. With tool use, Claude can perform tasks, manipulate data, and provide more dynamic—and accurate—responses.

Define a toolset for Claude and specify your request in natural language. Claude will then select the appropriate tool to fulfill the task and, when appropriate, execute the corresponding action:

  • Extract structured data from unstructured text…
  • Convert natural language requests into structured API calls…
  • Answer questions by searching databases or using web APIs…
  • Automate simple tasks through software APIs…
  • Orchestrate multiple fast Claude subagents for granular tasks…
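
Here is a minimal sketch of what that flow can look like with Anthropic’s Python SDK: you declare a toolset with JSON Schema inputs, Claude decides whether a tool is needed, and your code executes the call. The get_weather tool, its schema, the example question, and the model string are placeholder assumptions for illustration, not part of the announcement.

```python
import anthropic  # pip install anthropic

client = anthropic.Anthropic()  # expects ANTHROPIC_API_KEY in the environment

# Define a toolset: each tool has a name, a description, and a JSON Schema
# describing its input. The weather tool below is a placeholder example.
tools = [
    {
        "name": "get_weather",
        "description": "Get the current weather for a given city.",
        "input_schema": {
            "type": "object",
            "properties": {"city": {"type": "string", "description": "City name"}},
            "required": ["city"],
        },
    }
]

response = client.messages.create(
    model="claude-3-sonnet-20240229",  # any Claude 3 model supports tool use
    max_tokens=1024,
    tools=tools,
    messages=[{"role": "user", "content": "What's the weather in Grand Rapids?"}],
)

# If Claude decides a tool is needed, its reply contains a tool_use block
# naming the tool and the structured input it wants to pass to it.
for block in response.content:
    if block.type == "tool_use":
        print("Tool requested:", block.name, "with input:", block.input)
```

In a full loop, your code would run the requested tool and send its output back to Claude as a tool_result message so the model can compose the final answer.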

From DSC:
The above posting reminds me of this other posting…as AGENTS are likely going to become much more popular and part of our repertoire:

Forget Chatbots. AI Agents Are the Future — from wired.com by Will Knight
Startups and tech giants are trying to move from chatbots that offer help via text, to AI agents that can get stuff done. Recent demos include an AI coder called Devin and agents that play videogames.

Devin is just the latest, most polished example of a trend I’ve been tracking for a while—the emergence of AI agents that instead of just providing answers or advice about a problem presented by a human can take action to solve it. A few months back I test drove Auto-GPT, an open source program that attempts to do useful chores by taking actions on a person’s computer and on the web. Recently I tested another program called vimGPT to see how the visual skills of new AI models can help these agents browse the web more efficiently.

 


Looking Back on My AI Blog One Year In: AI Unfolding as Predicted — from stefanbauschard.substack.com by Stefan Bauschard

On May 30, 2023, I started blogging about AI, and, so far, I think things have been unfolding as predicted.

Topics included:

  • AGI
  • It’s not just another piece of Edtech
  • AI Literacy
  • Bot Teachers/tutors
  • AI Writing Detectors
  • AI Use in the Classroom is Uncontrollable
  • …and more

 

 

Nvidia Earnings: Stock Rallies As AI Giant Reports 600% Profit Explosion, 10-For-1 Stock Split — from forbes.com by Derek Saul

  • Nvidia reported $6.12 earnings per share and $26 billion of sales for the three-month period ending April 30, shattering mean analyst forecasts of $5.60 and $24.59 billion, according to FactSet.
  • Nvidia’s profits and revenues skyrocketed by 628% and 268% compared to 2023’s comparable period, respectively.
  • This was Nvidia’s most profitable and highest-sales quarter ever, topping the record $12.3 billion net income and $22.1 billion revenue of the quarter that ended this January.
  • Driving the numerous superlatives for Nvidia’s financial growth over the last year is unsurprisingly its AI-intensive datacenter division, which raked in $22.6 billion of revenue last quarter, a 427% year-over-year increase and a whopping 20 times higher than the $1.1 billion the segment brought in in 2020.

Per ChatGPT today:

NVIDIA is a prominent technology company known for its contributions to various fields, primarily focusing on graphics processing units (GPUs) and artificial intelligence (AI). Here’s an overview of NVIDIA’s main areas of activity:

  1. Graphics Processing Units (GPUs):
    • Consumer GPUs: NVIDIA is famous for its GeForce series of GPUs, which are widely used in gaming and personal computing for their high performance and visual capabilities.
    • Professional GPUs: NVIDIA’s Quadro series is designed for professional applications like 3D modeling, CAD (Computer-Aided Design), and video editing.
  2. Artificial Intelligence (AI) and Machine Learning:
    • NVIDIA GPUs are extensively used in AI research and development. They provide the computational power needed for training deep learning models.
    • The company offers specialized hardware for AI, such as the NVIDIA Tesla and A100 GPUs, which are used in data centers and supercomputing environments.
  3. Data Centers:
    • NVIDIA develops high-performance computing solutions for data centers, including GPU-accelerated servers and AI platforms. These products are essential for tasks like big data analytics, scientific simulations, and AI workloads.
  4. Autonomous Vehicles:
    • Through its DRIVE platform, NVIDIA provides hardware and software solutions for developing autonomous vehicles. This includes AI-based systems for perception, navigation, and decision-making.
  5. Edge Computing:
    • NVIDIA’s Jetson platform caters to edge computing, enabling AI-powered devices and applications to process data locally rather than relying on centralized data centers.
  6. Gaming and Entertainment:
    • Beyond GPUs, NVIDIA offers technologies like G-SYNC (for smoother gaming experiences) and NVIDIA GameWorks (a suite of tools for game developers).
  7. Healthcare:
    • NVIDIA’s Clara platform utilizes AI and GPU computing to advance medical imaging, genomics, and other healthcare applications.
  8. Omniverse:
    • NVIDIA Omniverse is a real-time graphics collaboration platform for 3D production pipelines. It’s designed for industries like animation, simulation, and visualization.
  9. Crypto Mining:
    • NVIDIA GPUs are also popular in the cryptocurrency mining community, although the company has developed specific products like the NVIDIA CMP (Cryptocurrency Mining Processor) to cater to this market without impacting the availability of GPUs for gamers and other users.

Overall, NVIDIA’s influence spans a broad range of industries, driven by its innovations in GPU technology and AI advancements.

 


Microsoft’s new ChatGPT competitor… — from The Rundown AI

The Rundown: Microsoft is reportedly developing a massive 500B parameter in-house LLM called MAI-1, aiming to compete with top AI models from OpenAI, Anthropic, and Google.


2024 | The AI Founder Report | Business Impact, Use cases, & Tools — from Hampton; via The Neuron

Hampton runs a private community for high-growth tech founders and CEOs. We asked our community of founders and owners how AI has impacted their business and what tools they use.

Here’s a sneak peek of what’s inside:

  • The budgets they set aside for AI research and development
  • The most common (and obscure) tools founders are using
  • Measurable business impacts founders have seen through using AI
  • Where they are purposefully not using AI and much more

2024 Work Trend Index Annual Report from Microsoft and LinkedIn
AI at Work Is Here. Now Comes the Hard Part
Employees want AI; leaders are looking for a path forward.

Also relevant, see Microsoft’s web page on this effort:

To help leaders and organizations overcome AI inertia, Microsoft and LinkedIn looked at how AI will reshape work and the labor market broadly, surveying 31,000 people across 31 countries, identifying labor and hiring trends from LinkedIn, and analyzing trillions of Microsoft 365 productivity signals as well as research with Fortune 500 customers. The data points to insights every leader and professional needs to know—and actions they can take—when it comes to AI’s implications for work.

 

AI for the physical world — from superhuman.ai by Zain Kahn

Excerpt: (emphasis DSC)

A new company called Archetype is trying to tackle that problem: It wants to make AI useful for more than just interacting with and understanding the digital realm. The startup just unveiled Newton — “the first foundation model that understands the physical world.”

What’s it for?
A warehouse or factory might have 100 different sensors that have to be analyzed separately to figure out whether the entire system is working as intended. Newton can understand and interpret all of the sensors at the same time, giving a better overview of how everything’s working together. Another benefit: You can ask Newton questions in plain English without needing much technical expertise.

How does it work?

  • Newton collects data from radar, motion sensors, and chemical and environmental trackers
  • It uses an LLM to combine each of those data streams into a cohesive package
  • It translates that data into text, visualizations, or code so it’s easy to understand
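
Archetype has not published how Newton is built, so the sketch below is purely hypothetical and provider-agnostic; it only illustrates the general pattern the bullets above describe: normalize several raw sensor streams, combine them into one plain-English request, and hand that to a language model. All sensor names and values are invented.

```python
from datetime import datetime, timezone

# Hypothetical sketch only; not Archetype's Newton or its API.
readings = [
    {"sensor": "radar_dock_3", "type": "radar", "value": "object detected at 4.2 m"},
    {"sensor": "motion_line_a", "type": "motion", "value": "no movement for 18 min"},
    {"sensor": "voc_room_2", "type": "chemical", "value": "VOC level 0.31 ppm (normal < 0.25)"},
]

def build_fusion_prompt(readings: list[dict]) -> str:
    """Combine heterogeneous sensor streams into one plain-English request."""
    lines = [f"- {r['sensor']} ({r['type']}): {r['value']}" for r in readings]
    timestamp = datetime.now(timezone.utc).isoformat(timespec="seconds")
    return (
        f"Sensor snapshot at {timestamp}:\n"
        + "\n".join(lines)
        + "\n\nIn plain English, is the system operating as intended? "
          "Flag anything that needs attention and explain why."
    )

if __name__ == "__main__":
    # In practice this prompt would be sent to a language model of your choice.
    print(build_fusion_prompt(readings))
```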

Apple’s $25-50 million Shutterstock deal highlights fierce competition for AI training data — from venturebeat.com by Michael Nuñez; via Tom Barrett’s Promptcraft e-newsletter

Apple has entered into a significant agreement with stock photography provider Shutterstock to license millions of images for training its artificial intelligence models. According to a Reuters report, the deal is estimated to be worth between $25 million and $50 million, placing Apple among several tech giants racing to secure vast troves of data to power their AI systems.


 

 

Shocking AI Statistics in 2023 — from techthatmatters.beehiiv.com by Harsh Makadia

  1. ChatGPT reached 100 million users faster than any other app. By February 2023, the chat.openai.com website saw an average of 25 million daily visitors. How can this rise in AI usage benefit your business?
  2. 45% of executives say the popularity of ChatGPT has led them to increase investment in AI. If executives are investing in AI personally, then how will their beliefs affect corporate investment in AI to drive automation further? Also, how will this affect the number of workers hired to manage AI systems within companies?
  3. eMarketer predicts that in 2024 at least 20% of Americans will use ChatGPT monthly and that a fifth of them are 25-34 year olds in the workforce. Does this mean that there are more young workers using AI?
  4. …plus 10 more stats

People are speaking with ChatGPT for hours, bringing 2013’s Her closer to reality — from arstechnica.com by Benj Edwards
Long mobile conversations with the AI assistant using AirPods echo the sci-fi film.

It turns out that Willison’s experience is far from unique. Others have been spending hours talking to ChatGPT using its voice recognition and voice synthesis features, sometimes through car connections. The realistic nature of the voice interaction feels largely effortless, but it’s not flawless. Sometimes, it has trouble in noisy environments, and there can be a pause between statements. But the way the ChatGPT voices simulate vocal tics and noises feels very human. “I’ve been using the voice function since yesterday and noticed that it makes breathing sounds when it speaks,” said one Reddit user. “It takes a deep breath before starting a sentence. And today, actually a minute ago, it coughed between words while answering my questions.”

From DSC:
Hmmmmmmm….I’m not liking the sound of this on my initial take of it. But perhaps there are some real positives to this. I need to keep an open mind.


Working with AI: Two paths to prompting — from oneusefulthing.org by Ethan Mollick
Don’t overcomplicate things

  1. Conversational Prompting [From DSC: i.e., keep it simple]
  2. Structured Prompting

For most people, [Conversational Prompting] is good enough to get started, and it is the technique I use most of the time when working with AI. Don’t overcomplicate things, just interact with the system and see what happens. After you have some experience, however, you may decide that you want to create prompts you can share with others, prompts that incorporate your expertise. We call this approach Structured Prompting, and, while improving AIs may make it irrelevant soon, it is currently a useful tool for helping others by encoding your knowledge into a prompt that anyone can use.
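
As a purely illustrative example (not one of Mollick’s published prompts), a structured prompt typically pins down a role, an explicit process, and constraints so that someone with no prompting experience can reuse it as-is; here it is written as a reusable Python constant:

```python
# Illustrative structured prompt; the role, steps, and limits are assumptions.
STRUCTURED_TUTOR_PROMPT = """\
Role: You are an experienced writing tutor for first-year college students.
Goal: Help the student improve one draft paragraph they paste in.
Steps:
  1. Summarize the paragraph's main claim in one sentence.
  2. Point out the single biggest weakness in logic or evidence.
  3. Suggest two concrete revisions, without rewriting the paragraph for the student.
Constraints: Be encouraging, keep feedback under 150 words, and end with one follow-up question.
"""
```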


These fake images reveal how AI amplifies our worst stereotypes — from washingtonpost.com by Nitasha Tiku, Kevin Schaul, and Szu Yu Chen (behind paywall)
AI image generators like Stable Diffusion and DALL-E amplify bias in gender and race, despite efforts to detoxify the data fueling these results.

Artificial intelligence image tools have a tendency to spin up disturbing clichés: Asian women are hypersexual. Africans are primitive. Europeans are worldly. Leaders are men. Prisoners are Black.

These stereotypes don’t reflect the real world; they stem from the data that trains the technology. Grabbed from the internet, these troves can be toxic — rife with pornography, misogyny, violence and bigotry.

Abeba Birhane, senior advisor for AI accountability at the Mozilla Foundation, contends that the tools can be improved if companies work hard to improve the data — an outcome she considers unlikely. In the meantime, the impact of these stereotypes will fall most heavily on the same communities harmed during the social media era, she said, adding: “People at the margins of society are continually excluded.”


ChatGPT app revenue shows no signs of slowing, but some other AI apps top it — from techcrunch.com by Sarah Perez; Via AI Valley – Barsee

ChatGPT, the AI-powered chatbot from OpenAI, far outpaces all other AI chatbot apps on mobile devices in terms of downloads and is a market leader by revenue, as well. However, it’s surprisingly not the top AI app by revenue — several photo AI apps and even other AI chatbots are actually making more money than ChatGPT, despite the latter having become a household name for an AI chat experience.


ChatGPT can now analyze files you upload to it without a plugin — from bgr.com by Joshua Hawkins; via Superhuman

According to new reports, OpenAI has begun rolling out a more streamlined approach to how people use ChatGPT. The new system will allow the AI to choose a model automatically, letting you run Python code, open a web browser, or generate images with DALL-E without extra interaction. Additionally, ChatGPT will now let you upload and analyze files.

 

Student Use Cases for AI: Start by Sharing These Guidelines with Your Class — from hbsp.harvard.edu by Ethan Mollick and Lilach Mollick

To help you explore some of the ways students can use this disruptive new technology to improve their learning—while making your job easier and more effective—we’ve written a series of articles that examine the following student use cases:

  1. AI as feedback generator
  2. AI as personal tutor
  3. AI as team coach
  4. AI as learner

Recap: Teaching in the Age of AI (What’s Working, What’s Not) — from celt.olemiss.edu by Derek Bruff, visiting associate director

Earlier this week, CETL and AIG hosted a discussion among UM faculty and other instructors about teaching and AI this fall semester. We wanted to know what was working when it came to policies and assignments that responded to generative AI technologies like ChatGPT, Google Bard, Midjourney, DALL-E, and more. We were also interested in hearing what wasn’t working, as well as questions and concerns that the university community had about teaching and AI.


Teaching: Want your students to be skeptical of ChatGPT? Try this. — from chronicle.com by Beth McMurtrie

Then, in class he put them into groups where they worked together to generate a 500-word essay on “Why I Write” entirely through ChatGPT. Each group had complete freedom in how they chose to use the tool. The key: They were asked to evaluate their essay on how well it offered a personal perspective and demonstrated a critical reading of the piece. Weiss also graded each ChatGPT-written essay and included an explanation of why he came up with that particular grade.

After that, the students were asked to record their observations on the experiment on the discussion board. Then they came together again as a class to discuss the experiment.

Weiss shared some of his students’ comments with me (with their approval). Here are a few:


2023 EDUCAUSE Horizon Action Plan: Generative AI — from library.educause.edu by Jenay Robert and Nicole Muscanell

Asked to describe the state of generative AI that they would like to see in higher education 10 years from now, panelists collaboratively constructed their preferred future.


Will Teachers Listen to Feedback From AI? Researchers Are Betting on It — from edsurge.com by Olina Banerji

Julie York, a computer science and media teacher at South Portland High School in Maine, was scouring the internet for discussion tools for her class when she found TeachFX. An AI tool that takes recorded audio from a classroom and turns it into data about who talked and for how long, it seemed like a cool way for York to discuss issues of data privacy, consent and bias with her students. But York soon realized that TeachFX was meant for much more.

York found that TeachFX listened to her very carefully, and generated a detailed feedback report on her specific teaching style. York was hooked, in part because she says her school administration simply doesn’t have the time to observe teachers while tending to several other pressing concerns.

“I rarely ever get feedback on my teaching style. This was giving me 100 percent quantifiable data on how many questions I asked and how often I asked them in a 90-minute class,” York says. “It’s not a rubric. It’s a reflection.”

TeachFX is easy to use, York says. It’s as simple as switching on a recording device.

But TeachFX, she adds, is focused not on her students’ achievements, but instead on her performance as a teacher.


ChatGPT Is Landing Kids in the Principal’s Office, Survey Finds — from the74million.org by Mark Keierleber
While educators worry that students are using generative AI to cheat, a new report finds students are turning to the tool more for personal problems.

Indeed, 58% of students, and 72% of those in special education, said they’ve used generative AI during the 2022-23 academic year, just not primarily for the reasons that teachers fear most. Among youth who completed the nationally representative survey, just 23% said they used it for academic purposes and 19% said they’ve used the tools to help them write and submit a paper. Instead, 29% reported having used it to deal with anxiety or mental health issues, 22% for issues with friends and 16% for family conflicts.

Part of the disconnect dividing teachers and students, researchers found, may come down to gray areas. Just 40% of parents said they or their child were given guidance on ways they can use generative AI without running afoul of school rules. Only 24% of teachers say they’ve been trained on how to respond if they suspect a student used generative AI to cheat.


Embracing weirdness: What it means to use AI as a (writing) tool — from oneusefulthing.org by Ethan Mollick
AI is strange. We need to learn to use it.

But LLMs are not Google replacements, or thesauruses or grammar checkers. Instead, they are capable of so much more weird and useful help.


Diving Deep into AI: Navigating the L&D Landscape — from learningguild.com by Markus Bernhardt

The prospect of AI-powered, tailored, on-demand learning and performance support is exhilarating: It starts with traditional digital learning made into fully adaptive learning experiences, which would adjust to strengths and weaknesses for each individual learner. The possibilities extend all the way through to simulations and augmented reality, an environment to put into practice knowledge and skills, whether as individuals or working in a team simulation. The possibilities are immense.

Thanks to generative AI, such visions are transitioning from fiction to reality.


Video: Unleashing the Power of AI in L&D — from drphilippahardman.substack.com by Dr. Philippa Hardman
An exclusive video walkthrough of my keynote at Sweden’s national L&D conference this week

Highlights

  • The wicked problem of L&D: last year, $371 billion was spent on workplace training globally, but only 12% of employees apply what they learn in the workplace
  • An innovative approach to L&D: when Mastery Learning is used to design & deliver workplace training, the rate of “transfer” (i.e. behaviour change & application) is 67%
  • AI 101: quick summary of classification, generative and interactive AI and its uses in L&D
  • The impact of AI: my initial research shows that AI has the potential to scale Mastery Learning and, in the process:
    • reduce the “time to training design” by 94% > faster
    • reduce the cost of training design by 92% > cheaper
    • increase the quality of learning design & delivery by 96% > better
  • Research also shows that the vast majority of workplaces are using AI only to “oil the machine” rather than innovate and improve our processes & practices
  • Practical tips: how to get started on your AI journey in your company, and a glimpse of what L&D roles might look like in a post-AI world

 

The 2023 Global Sentiment Survey — from donaldhtaylor.co.uk by Don Taylor

Excerpt:

This year’s Global Sentiment Survey – the tenth – paints a picture that is both familiar and unusual. In our 2020 survey report, we noted that ‘Data dominates this year’s survey’. It does so again this year, with the nearly 4,000 respondents showing a strong interest in AI, Skills-based talent management and Learning analytics (in positions #2, #3 and #4), all of which rely on data. The table is topped by Reskilling/upskilling, in the #1 spot for the third year running.

Donald Taylor's GSS 2023


Also see Don’s report here >>

 

 

35 Ways Real People Are Using A.I. Right Now — from nytimes.com by Francesca Paris and Larry Buchanan

From DSC:
It was interesting to see how people are using AI these days. The article mentioned things from planning Gluten Free (GF) meals to planning gardens, workouts, and more. Faculty members, staff, students, researchers and educators in general may find Elicit, Scholarcy and Scite to be useful tools. I put in a question at Elicit and it looks interesting. I like their interface, which allows me to quickly re-sort things.

Snapshot of a query result from a tool called Elicit


 

There Is No A.I. — from newyorker.com by Jaron Lanier
There are ways of controlling the new technology—but first we have to stop mythologizing it.

Excerpts:

If the new tech isn’t true artificial intelligence, then what is it? In my view, the most accurate way to understand what we are building today is as an innovative form of social collaboration.

The new programs mash up work done by human minds. What’s innovative is that the mashup process has become guided and constrained, so that the results are usable and often striking. This is a significant achievement and worth celebrating—but it can be thought of as illuminating previously hidden concordances between human creations, rather than as the invention of a new mind.

 


 

Resource per Steve Nouri on LinkedIn


 
© 2024 | Daniel Christian