The Prompt #14: Your Guide to Custom Instructions — from noisemedia.ai by Alex Banks

Whilst we typically cover a single ‘prompt’ to use with ChatGPT, today we’re exploring a new feature now available to everyone: custom instructions.

You provide specific directions for ChatGPT, leading to greater control over the output. It’s all about guiding the AI to get the responses you really want.

To get started:
Log into ChatGPT → Click on your name/email in the bottom-left corner → select ‘Custom instructions’
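
For readers who work with the API rather than the chat interface, a rough equivalent of custom instructions is a standing system message. Below is a minimal sketch, assuming a recent version of the openai Python package (the exact call differs across SDK versions); the model name and instruction text are placeholders, not part of the custom-instructions feature itself.

```python
# A minimal sketch (not the ChatGPT custom-instructions feature itself) showing how
# standing directions can be approximated via a system message when calling the API.
# The model name and instruction text below are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

CUSTOM_INSTRUCTIONS = (
    "I am a K-12 teacher. Keep answers concise, avoid jargon, "
    "and always suggest one classroom activity."
)

response = client.chat.completions.create(
    model="gpt-4",  # placeholder model name
    messages=[
        {"role": "system", "content": CUSTOM_INSTRUCTIONS},  # acts like custom instructions
        {"role": "user", "content": "Explain photosynthesis."},
    ],
)
print(response.choices[0].message.content)
```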


Meet Zoom AI Companion, your new AI assistant! Unlock the benefits with a paid Zoom account — from blog.zoom.us by Smita Hashim

We’re excited to introduce you to AI Companion (formerly Zoom IQ), your new generative AI assistant across the Zoom platform. AI Companion empowers individuals by helping them be more productive, connect and collaborate with teammates, and improve their skills.

Envision being able to interact with AI Companion through a conversational interface and ask for help on a whole range of tasks, similarly to how you would with a real assistant. You’ll be able to ask it to help prepare for your upcoming meeting, get a consolidated summary of prior Zoom meetings and relevant chat threads, and even find relevant documents and tickets from connected third-party applications with your permission.

From DSC:
“You can ask AI Companion to catch you up on what you missed during a meeting in progress.”

And what if some key details were missed? Should you rely on this? I’d treat this with care/caution myself.



A.I.’s un-learning problem: Researchers say it’s virtually impossible to make an A.I. model ‘forget’ the things it learns from private user data — from fortune.com by Stephen Pastis (behind paywall)

That’s because, as it turns out, it’s nearly impossible to remove a user’s data from a trained A.I. model without resetting the model and forfeiting the extensive money and effort put into training it. To use a human analogy, once an A.I. has “seen” something, there is no easy way to tell the model to “forget” what it saw. And deleting the model entirely is also surprisingly difficult.

This represents one of the thorniest unresolved challenges of our incipient artificial intelligence era, alongside issues like A.I. “hallucinations” and the difficulties of explaining certain A.I. outputs.


More companies see ChatGPT training as a hot job perk for office workers — from cnbc.com by Mikaela Cohen

Key points:

  • Workplaces filled with artificial intelligence are closer to becoming a reality, making it essential that workers know how to use generative AI.
  • Offering specific AI chatbot training to current employees could be your next best talent retention tactic.
  • 90% of business leaders see ChatGPT as a beneficial skill in job applicants, according to a report from career site Resume Builder.

OpenAI Plugs ChatGPT Into Canva to Sharpen Its Competitive Edge in AI — from decrypt.co by Jose Antonio Lanz
Now ChatGPT Plus users can “talk” to Canva directly from OpenAI’s bot, making their workflow easier.

This strategic move aims to make the process of creating visuals such as logos, banners, and more even simpler for businesses and entrepreneurs.

This latest integration could improve the way users generate visuals by offering a streamlined and user-friendly approach to digital design.


From DSC:
This Tweet addresses a likely component of our future learning ecosystems:


Large language models aren’t people. Let’s stop testing them as if they were. — from technologyreview.com by Will Douglas Heaven
With hopes and fears about this technology running wild, it’s time to agree on what it can and can’t do.

That’s why a growing number of researchers—computer scientists, cognitive scientists, neuroscientists, linguists—want to overhaul the way these models are assessed, calling for more rigorous and exhaustive evaluation. Some think that the practice of scoring machines on human tests is wrongheaded, period, and should be ditched.

“There’s a lot of anthropomorphizing going on,” she says. “And that’s kind of coloring the way that we think about these systems and how we test them.”

“There is a long history of developing methods to test the human mind,” says Laura Weidinger, a senior research scientist at Google DeepMind. “With large language models producing text that seems so human-like, it is tempting to assume that human psychology tests will be useful for evaluating them. But that’s not true: human psychology tests rely on many assumptions that may not hold for large language models.”


We Analyzed Millions of ChatGPT User Sessions: Visits are Down 29% since May, Programming Assistance is 30% of Use — from sparktoro.com by Rand Fishkin

In concert with the fine folks at Datos, whose opt-in, anonymized panel of 20M devices (desktop and mobile, covering 200+ countries) provides outstanding insight into what real people are doing on the web, we undertook a challenging project to answer at least some of the mystery surrounding ChatGPT.



Crypto in ‘arms race’ against AI-powered scams — Quantstamp co-founder — from cointelegraph.com by Tom Mitchelhill
Quantstamp’s Richard Ma explained that the coming surge in sophisticated AI phishing scams could pose an existential threat to crypto organizations.

With the field of artificial intelligence evolving at near breakneck speed, scammers now have access to tools that can help them execute highly sophisticated attacks en masse, warns the co-founder of Web3 security firm Quantstamp.


 

Why Christians need to support diversity professionals, not demonize them — from religionnews.com by Michelle Loyd-Paige
Even among Christians, DEI leaders find themselves isolated and unsupported.

For nearly 39 years, I have taught about and advocated for diversity, equity, inclusion, anti-racism and social justice in Christian contexts. I have been sustained by the knowledge that diversity is a part of God’s good creation and is celebrated in the Bible. 

And not just diversity, but love for our neighbors, care for the immigrant, and justice for the marginalized and oppressed. In fact, the Hebrew and Greek words for justice appear in Scripture more than 1,000 times. 

It could be argued that Jesus’ ministry on earth exemplified the value of diversity, the importance of inclusion and the obligation of justice and restoration. Our ministry — in schools, churches, business, wherever we find ourselves — should reflect the same.

From DSC:
I was at Calvin (then College) when Michelle was there. I am very grateful for her work over my 10+ years there. I learned many things from her and had my “lenses” refined several times due to her presentations, questions, and the media that she showed. Thank you, Michelle, for all of your work and uphill efforts! It’s made a difference! It impacted the culture at Calvin. It impacted me.

The other thing that helped shape my background was when my family moved to a much more diverse area. And I’ve tried to continue that perspective in my own family. I don’t know half of the languages that are spoken in our neighborhood, but I love the diversity there! I believe our kids (now mostly grown) have benefited from it and are better prepared for what they will encounter in the real world.

 

Future of Work Report AI at Work — from economicgraph.linkedin.com; via Superhuman

The intersection of AI and the world of work: Not only are job postings increasing, but we’re seeing more LinkedIn members around the globe adding AI skills to their profiles than ever before. We’ve seen a 21x increase in the share of global English-language job postings that mention new AI technologies such as GPT or ChatGPT since November 2022. In June 2023, the number of AI-skilled members was 9x larger than in January 2016, globally.

The state of play of Generative AI (GAI) in the workforce: GAI technologies, including ChatGPT, are poised to start to change the way we work. In fact, 47% of US executives believe that using generative AI will increase productivity, and 92% agree that people skills are more important than ever. This means jobs won’t necessarily go away, but they will change, as will the skills necessary to do them.

Also relevant/see:

The Working Future: More Human, Not Less — from bain.com
It’s time to change how we think about work

Contents

  • Introduction
  • Motivations for Work Are Changing
  • Beliefs about What Makes a “Good Job” Are Diverging
  • Automation Is Helping to Rehumanize Work
  • Technological Change Is Blurring the Boundaries of the Firm
  • Young Workers Are Increasingly Overwhelmed
  • Rehumanizing Work: The Journey Ahead
 

Introductory comments from DSC:

Sometimes people and vendors write about AI’s capabilities in such a glowingly positive way. It seems like AI can do everything in the world. And while I appreciate the growing capabilities of Large Language Models (LLMs) and the like, there are some things I don’t want AI-driven apps to do.

For example, I get why AI can be helpful in correcting my misspellings, my grammatical errors, and the like. That said, I don’t want AI to write my emails for me. I want to write my own emails. I want to communicate what I want to communicate. I don’t want to outsource my communication. 

And what if an AI tool summarizes an email series in a way that causes me to miss some key pieces of information? Hmmm…not good.

Ok, enough soapboxing. I’ll continue with some resources.


ChatGPT Enterprise

Introducing ChatGPT Enterprise — from openai.com
Get enterprise-grade security & privacy and the most powerful version of ChatGPT yet.

We’re launching ChatGPT Enterprise, which offers enterprise-grade security and privacy, unlimited higher-speed GPT-4 access, longer context windows for processing longer inputs, advanced data analysis capabilities, customization options, and much more. We believe AI can assist and elevate every aspect of our working lives and make teams more creative and productive. Today marks another step towards an AI assistant for work that helps with any task, is customized for your organization, and that protects your company data.

Enterprise-grade security & privacy and the most powerful version of ChatGPT yet. — from openai.com


NVIDIA

Nvidia’s Q2 earnings prove it’s the big winner in the generative AI boom — from techcrunch.com by Kirsten Korosec

Nvidia Quarterly Earnings Report Q2 Smashes Expectations At $13.5B — from techbusinessnews.com.au
Nvidia’s quarterly earnings report (Q2) smashed expectations, with revenue coming in at $13.5B, more than double the $6.7B reported a year earlier. The chipmaker also projected roughly $16B in revenue for the October quarter.


MISC

OpenAI Passes $1 Billion Revenue Pace as Big Companies Boost AI Spending — from theinformation.com by Amir Efrati and Aaron Holmes

OpenAI is currently on pace to generate more than $1 billion in revenue over the next 12 months from the sale of artificial intelligence software and the computing capacity that powers it. That’s far ahead of revenue projections the company previously shared with its shareholders, according to a person with direct knowledge of the situation.

OpenAI’s GPTBot blocked by major websites and publishers — from the-decoder.com by Matthias Bastian
An emerging chatbot ecosystem builds on existing web content and could displace traditional websites. At the same time, licensing and financing are largely unresolved.

OpenAI offers publishers and website operators an opt-out if they prefer not to make their content available to chatbots and AI models for free. This can be done by blocking OpenAI’s web crawler “GPTBot” via the robots.txt file. The bot collects content to improve future AI models, according to OpenAI.

Major media companies including the New York Times, CNN, Reuters, Chicago Tribune, ABC, and Australian Community Media (ACM) are now blocking GPTBot. Other web-based content providers such as Amazon, Wikihow, and Quora are also blocking the OpenAI crawler.
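
For site owners, the opt-out described above comes down to a short addition to the site’s robots.txt file, using the user-agent token OpenAI has published for its crawler:

```
# robots.txt: block OpenAI's GPTBot from crawling the entire site
User-agent: GPTBot
Disallow: /
```

A site can also limit the rule to specific directories by replacing “/” with a narrower path.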

Introducing Code Llama, a state-of-the-art large language model for coding  — from ai.meta.com

Takeaways re: Code Llama:

  • Is a state-of-the-art LLM capable of generating code, and natural language about code, from both code and natural language prompts.
  • Is free for research and commercial use.
  • Is built on top of Llama 2 and is available in three models…
  • In our own benchmark testing, Code Llama outperformed state-of-the-art publicly available LLMs on code tasks
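
As a rough illustration of the first takeaway, here is a minimal sketch of generating code with a Code Llama checkpoint through the Hugging Face transformers library. The checkpoint id and generation settings are assumptions, and in practice a GPU with enough memory (and the accelerate package), or a quantized variant of the model, is needed.

```python
# A minimal sketch (not Meta's official example): generating code with a
# Code Llama checkpoint via the Hugging Face transformers library.
# The model id and generation settings are assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "codellama/CodeLlama-7b-hf"  # assumed hub id for the smallest variant
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "# Python function that returns the n-th Fibonacci number\ndef fib(n):"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```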

Key Highlights of Google Cloud Next ‘23 — from analyticsindiamag.com by Shritama Saha
Meta’s Llama 2, Anthropic’s Claude 2, and TII’s Falcon join Model Garden, expanding model variety.

AI finally beats humans at a real-life sport — drone racing — from nature.com by Dan Fox
The new system combines simulation with onboard sensing and computation.

From DSC:
This is scary — not at all comforting to me. Militaries around the world continue their jockeying to be the most dominant, powerful, and effective killers of humankind. That definitely includes the United States and China. But certainly others as well. And below is another alarming item, also pointing out the downsides of how we use technologies.

The Next Wave of Scams Will Be Deepfake Video Calls From Your Boss — from bloomberg.com by Margi Murphy; behind paywall

Cybercriminals are constantly searching for new ways to trick people. One of the more recent additions to their arsenal was voice simulation software.

10 Great Colleges For Studying Artificial Intelligence — from forbes.com by Sim Tumay

The debut of ChatGPT in November created angst for college admission officers and professors worried they would be flooded by student essays written with the undisclosed assistance of artificial intelligence. But the explosion of interest in AI has benefits for higher education, including a new generation of students interested in studying and working in the field. In response, universities are revising their curriculums to educate AI engineers.

 

Don’t Be Fooled: How You Can Master Media Literacy in the Digital Age — from youtube.com by Professor Sue Ellen Christian

During this special keynote presentation, Western Michigan University (WMU) professor Sue Ellen Christian speaks about the importance of media literacy for all ages and how we can help educate our friends and families about media literacy principles. Hosted by the Grand Rapids Public Library and GRTV, a program of the Grand Rapids Community Media Center. Special thanks to the Grand Rapids Public Library Foundation for their support of this program.

Excerpts:

Media Literacy is the ability to access, analyze, evaluate, and create media in a variety of forms. — Center for Media Literacy

5 things to do when confronted with concerns about content.


Also relevant/see:

Kalamazoo Valley Museum’s newest exhibit teaches community about media literacy — from mlive.com by Gabi Broekema

 

From DSC:
Yesterday, I posted the item about Google’s NotebookLM research tool. Excerpt:

What if you could have a conversation with your notes? That question has consumed a corner of the internet recently, as companies like Dropbox, Box, Notion, and others have built generative AI tools that let you interact with and create new things from the data you already have in their systems.

Google’s version of this is called NotebookLM. It’s an AI-powered research tool that is meant to help you organize and interact with your own notes.

That got me to thinking…

What if the presenter/teacher/professor/trainer/preacher provided a set of notes for the AI to compare to the readers’ notes? 

That way, the AI could see the discrepancies between what the presenter wanted their audience to learn/hear and what was actually being learned/heard. In a sort of digital Socratic Method, the AI could then generate some leading questions to get the audience member to check their thinking/understanding of the topic.

The end result would be that the main points were properly communicated/learned/received.
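
As a thought experiment, the comparison step could be prototyped with off-the-shelf pieces: embed the presenter’s key points and the learner’s notes, flag the points that have no close match, and ask an LLM to turn those gaps into leading questions. Everything below (model names, the similarity threshold, the helper functions) is a hypothetical sketch, not a description of NotebookLM.

```python
# A hypothetical sketch of the idea above: find points from the presenter's notes
# that are missing from a learner's notes (via embedding similarity), then ask an
# LLM to turn the gaps into Socratic-style questions. Model names and the
# similarity threshold are assumptions.
from openai import OpenAI

client = OpenAI()

def embed(texts):
    resp = client.embeddings.create(model="text-embedding-ada-002", input=texts)
    return [d.embedding for d in resp.data]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb)

def missing_points(presenter_points, learner_notes, threshold=0.85):
    # Flag presenter points whose best match in the learner's notes is weak.
    learner_vecs = embed(learner_notes)
    gaps = []
    for point, vec in zip(presenter_points, embed(presenter_points)):
        if max(cosine(vec, lv) for lv in learner_vecs) < threshold:
            gaps.append(point)
    return gaps

def socratic_questions(gaps):
    prompt = "Write one leading, Socratic-style question for each point:\n" + "\n".join(gaps)
    resp = client.chat.completions.create(
        model="gpt-4", messages=[{"role": "user", "content": prompt}]
    )
    return resp.choices[0].message.content
```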

 

Google’s AI-powered note-taking app is the messy beginning of something great — from theverge.com by David Pierce; via AI Insider
NotebookLM is a neat research tool with some big ideas. It’s still rough and new, but it feels like Google is onto something.

Excerpts (emphasis DSC):

What if you could have a conversation with your notes? That question has consumed a corner of the internet recently, as companies like Dropbox, Box, Notion, and others have built generative AI tools that let you interact with and create new things from the data you already have in their systems.

Google’s version of this is called NotebookLM. It’s an AI-powered research tool that is meant to help you organize and interact with your own notes. 

Right now, it’s really just a prototype, but a small team inside the company has been trying to figure out what an AI notebook might look like.

 


ElevenLabs’ AI Voice Generator Can Now Fake Your Voice in 30 Languages — from gizmodo.com by Kyle Barr
ElevenLabs said its AI voice generator is out of beta and will support video game and audiobook creators with cheap audio.

According to ElevenLabs, the new Multilingual v2 model promises it can produce “emotionally rich” audio in a total of 30 languages. The company offers two AI voice tools: one is a text-to-speech model, and the other is the “VoiceLab” that lets paying users clone a voice by inputting fragments of their (or others’) speech into the model to create a kind of voice clone. With the v2 model, users can get these generated voices to start speaking in Greek, Malay, or Turkish.

Since then, ElevenLabs claims it has integrated new measures to ensure users can only clone their own voice. Users need to verify their speech with a text captcha prompt, which is then compared to the original voice sample.
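
For context on what generated voices in 30 languages look like in practice, here is a rough sketch of a call to ElevenLabs’ text-to-speech REST API using the Multilingual v2 model named above. The endpoint path, header name, and request fields reflect the publicly documented API but should be treated as assumptions to verify against the current reference.

```python
# A rough sketch of calling ElevenLabs' text-to-speech REST API with the
# multilingual v2 model mentioned above. Endpoint path, header name, and request
# fields are assumptions to check against the current API reference.
import requests

API_KEY = "YOUR_ELEVENLABS_API_KEY"  # placeholder
VOICE_ID = "YOUR_VOICE_ID"           # placeholder: id of a (verified) cloned voice

resp = requests.post(
    f"https://api.elevenlabs.io/v1/text-to-speech/{VOICE_ID}",
    headers={"xi-api-key": API_KEY, "Content-Type": "application/json"},
    json={
        "text": "Merhaba! Bu ses yapay zeka ile uretildi.",  # Turkish sample text
        "model_id": "eleven_multilingual_v2",
    },
)
resp.raise_for_status()
with open("sample.mp3", "wb") as f:
    f.write(resp.content)  # the API returns raw audio bytes
```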

From DSC:
I don’t care what they say regarding safeguards/proof of identity/etc. This technology has been abused and will be abused in the future. We can count on it. The question now is, how do we deal with it?



Google, Amazon, Nvidia and other tech giants invest in AI startup Hugging Face, sending its valuation to $4.5 billion — from cnbc.com by Kif Leswing

But Hugging Face produces a platform where AI developers can share code, models, and data sets, and use the company’s developer tools to get open-source artificial intelligence models running more easily. In particular, Hugging Face often hosts weights, or large files with lists of numbers, which are the heart of most modern AI models.

While Hugging Face has developed some models, like BLOOM, its primary product is its website platform, where users can upload models and their weights. It also develops a series of software tools called libraries that allow users to get models working quickly, to clean up large datasets, or to evaluate their performance. It also hosts some AI models in a web interface so end users can experiment with them.
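
A minimal sketch of the two things described above: fetching files (“weights”) from the Hub and using one of the company’s libraries to get a hosted model running. The checkpoint name is an assumed example (a small BLOOM variant); any text-generation model on the Hub would work the same way.

```python
# A minimal sketch of downloading files from the Hugging Face Hub and running a
# hosted model with the transformers library. The checkpoint is an assumed example.
from huggingface_hub import hf_hub_download
from transformers import pipeline

# 1) Fetch an individual file from a model repo on the Hub.
config_path = hf_hub_download(repo_id="bigscience/bloom-560m", filename="config.json")
print("Downloaded:", config_path)

# 2) Or let a library handle everything: download the weights and run the model.
generator = pipeline("text-generation", model="bigscience/bloom-560m")
print(generator("Open-source AI models are", max_new_tokens=30)[0]["generated_text"])
```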


The global semiconductor talent shortage — from www2.deloitte.com
How to solve semiconductor workforce challenges

Numerous skills are required to grow the semiconductor ecosystem over the next decade. Globally, we will need tens of thousands of skilled tradespeople to build new plants to increase and localize manufacturing capacity: electricians, pipefitters, welders; thousands more graduate electrical engineers to design chips and the tools that make the chips; more engineers of various kinds in the fabs themselves, but also operators and technicians. And if we grow the back end in Europe and the Americas, that equates to even more jobs.

Each of these job groups has distinct training and educational needs; however, the number of students in semiconductor-focused programs (for example, undergraduates in semiconductor design and fabrication) has dwindled. Skills are also evolving within these job groups, in part due to automation and increased digitization. Digital skills, such as cloud, AI, and analytics, are needed in design and manufacturing more than ever.

The chip industry has long partnered with universities and engineering schools. Going forward, they also need to work more with local tech schools, vocational schools, and community colleges; and other organizations, such as the National Science Foundation in the United States.


Our principles for partnering with the music industry on AI technology — from blog.youtube (Google) by Neal Mohan, CEO, YouTube
AI is here, and we will embrace it responsibly together with our music partners.

  • Principle #1: AI is here, and we will embrace it responsibly together with our music partners.
  • Principle #2: AI is ushering in a new age of creative expression, but it must include appropriate protections and unlock opportunities for music partners who decide to participate.
  • Principle #3: We’ve built an industry-leading trust and safety organization and content policies. We will scale those to meet the challenges of AI.

Developers are now using AI for text-to-music apps — from techcrunch.com by Ivan Mehta

Brett Bauman, the developer of PlayListAI (previously LineupSupply), launched a new app called Songburst on the App Store this week. The app doesn’t have a steep learning curve. You just have to type in a prompt like “Calming piano music to listen to while studying” or “Funky beats for a podcast intro” to let the app generate a music clip.

If you can’t think of a prompt, the app has prompts in different categories, including video, lo-fi, podcast, gaming, meditation and sample.


A Generative AI Primer — from er.educause.edu by Brian Basgen
Understanding the current state of technology requires understanding its origins. This reading list provides sources relevant to the form of generative AI that led to natural language processing (NLP) models such as ChatGPT.


Three big questions about AI and the future of work and learning — from workshift.opencampusmedia.org by Alex Swartsel
AI is set to transform education and work today and well into the future. We need to start asking tough questions right now, writes Alex Swartsel of JFF.

  1. How will AI reshape jobs, and how can we prepare all workers and learners with the skills they’ll need?
  2. How can education and workforce leaders equitably adopt AI platforms to accelerate their impact?
  3. How might we catalyze sustainable policy, practice, and investments in solutions that drive economic opportunity?

“As AI reshapes both the economy and society, we must collectively call for better data, increased accountability, and more flexible support for workers,” Swartsel writes.


The Current State of AI for Educators (August, 2023) — from drphilippahardman.substack.com by Dr. Philippa Hardman
A podcast interview with the University of Toronto on where we’re at & where we’re going.

 

From DSC: If this is true, how will we meet this type of demand?!?

RESKILLING NEEDED FOR 40% OF WORKFORCE BECAUSE OF AI, REPORT FROM IBM SAYS — from staffingindustry.com; via GSV

Generative AI will require skills upgrades for workers, according to a report from IBM based on a survey of executives from around the world. One finding: Business leaders say 40% of their workforces will need to reskill as AI and automation are implemented over the next three years. That could translate to 1.4 billion people in the global workforce who require upskilling, according to the company.
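
A quick sanity check of how those two figures relate (the global workforce size below is my assumption, roughly in line with commonly cited estimates):

```python
# Rough arithmetic behind the headline figures; the workforce size is an assumption.
global_workforce = 3.4e9          # assumed ~3.4 billion workers worldwide
share_needing_reskilling = 0.40   # IBM survey figure quoted above
print(f"{global_workforce * share_needing_reskilling / 1e9:.2f} billion")  # ~1.4 billion people
```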

 

In solitary confinement, your neighbors are your teachers. — from opencampusmedia.org by Charlotte West

Kwaneta Harris, who is incarcerated in Texas, writes about helping the young women who live next to her in solitary confinement learn how to read. This essay was co-published with Slate.


San Quentin is helping men prepare for jobs outside — from college-inside.beehiiv.com by Charlotte West

I visited Employer Day at San Quentin, where 30 incarcerated men sit down with Bay Area employers and apply what they learned during a four-month job readiness program.

Job interviews can be daunting, especially if you’ve never done one before. But even more so if you’ve been incarcerated. I visited Employer Day at San Quentin in March to learn more.

 
 

Legal Innovators Assemble! Great Speakers for London in November — from artificiallawyer.com

The Legal Innovators UK conference will take place on 8 + 9 November, and we are already assembling a fantastic group of speakers from across the legal innovation ecosystem.

The two-day event comes at a time of potentially massive change for the legal market and we will be bringing you engaging panels and presentations where leading experts really dig into the issues of the day, from generative AI, to the evolution of ALSPs, to law firm innovation teams in this new era for legal tech, to how empowered legal ops groups and pioneering GCs are taking inhouse teams in new directions.

Virtual law firm Scale absorbs Texas IP firm in first acquisition — from reuters.com by Sara Merken

Aug 1 (Reuters) – Virtual law firm Scale said [on 8/1/23] that it has brought on small Texas intellectual property firm Creedon in the first of what it hopes may be a series of acquisitions.

James Creedon and two other attorneys from his firm have joined Scale, a Silicon Valley-founded law firm where lawyers work entirely remotely.

Scale, which debuted in 2020, is among so-called “distributed” or virtual firms that use technology to operate without physical offices and embrace a non-traditional law firm business model.

The lawyers are leaning into AI — from alexofftherecord.com by Alex Su
Despite all the gloom and doom, corporate legal and law firms are both embracing generative AI much more quickly than previous technologies

When I first heard law firms announcing that they were adopting AI, I was skeptical. Anyone can announce a partnership or selection/piloting of an AI vendor. It’s good PR, and doesn’t mean that the firm has truly embraced AI. But when they create their own GPT-powered tool—that feels different. Setting aside whether it’s a good idea to build your own vs. buy, it certainly feels like a real investment, especially since the firms are dedicating significant internal resources to it.

Today I’ll discuss why generative AI is diffusing across law firms much more quickly than expected.

Leading your law firm into the Gen AI Era — from jordanfurlong.substack.com by Jordan Furlong
Lawyers are embracing its promise. Clients want to reap its rewards. Here are three ways your firm can respond to the immense disruption and extraordinary opportunity of Generative AI.

  1. Move fast to implement project and client pricing.
  2. Prepare to hire fewer associates and to rethink partnership.
  3. Establish a fresh approach to developing future law firm leaders.


Above resource via BrainyActs — who mentioned that the QR code takes you to this survey. Just 3 simple questions.

Q1: Agree/Disagree: Artificial Intelligence (AI) won’t replace lawyers anytime soon. Lawyers who use AI will replace lawyers who do not use AI.

Q2: Agree/Disagree: Non-lawyers should be allowed to have an ownership interest in a law firm.

Q3 Agree/Disagree: Trained non-lawyers should be allowed to advocate for parties in lower courts.


Generative AI In The Law: Where Could This All Be Headed? — from abovethelaw.com
Findings from a new Wolters Kluwer / Above the Law survey.

To get a sense of what the legal industry predicts, Above the Law and Wolters Kluwer fielded a survey of 275 professionals from March to mid-April 2023. We asked about AI’s potential effects in varied areas of the legal industry: Will it differentiate successful firms? Which practice areas could be affected the most? Could even high-level work be transformed?

 


How to spot deepfakes created by AI image generators | Can you trust your eyes | The deepfake election — from axios.com by various; via Tom Barrett

As the 2024 campaign season begins, AI image generators have advanced from novelties to powerful tools able to generate photorealistic images, while comprehensive regulation lags behind.

Why it matters: As more fake images appear in political ads, the onus will be on the public to spot phony content.

Go deeper: Can you tell the difference between real and AI-generated images? Take our quiz:


4 Charts That Show Why AI Progress Is Unlikely to Slow Down — from time.com; with thanks to Donald Clark out on LinkedIn for this resource


The state of AI in 2023: Generative AI’s breakout year — from McKinsey.com

Table of Contents

  1. It’s early days still, but use of gen AI is already widespread
  2. Leading companies are already ahead with gen AI
  3. AI-related talent needs shift, and AI’s workforce effects are expected to be substantial
  4. With all eyes on gen AI, AI adoption and impact remain steady
  5. About the research

Top 10 Chief AI Officers — from aimagazine.com

The Chief AI Officer is a relatively new job role, yet it is becoming increasingly important as businesses invest further in AI.

Now more than ever, the workplace must prepare for AI and the immense opportunities, as well as challenges, that this type of evolving technology can provide. The person in this role is responsible for guiding companies through complex AI tools, algorithms and development. All of this works to ensure that the company stays ahead of the curve and capitalises on digital growth and transformation.


NVIDIA-related items

SIGGRAPH Special Address: NVIDIA CEO Brings Generative AI to LA Show — from blogs.nvidia.com by Brian Caulfield
Speaking to thousands of developers and graphics pros, Jensen Huang announces updated GH200 Grace Hopper Superchip, NVIDIA AI Workbench, updates NVIDIA Omniverse with generative AI.

The hottest commodity in AI right now isn’t ChatGPT — it’s the $40,000 chip that has sparked a frenzied spending spree — from businessinsider.com by Hasan Chowdhury

NVIDIA Releases Major Omniverse Upgrade with Generative AI and OpenUSD — from enterpriseai.news

Nvidia teams up with Hugging Face to offer cloud-based AI training — from techcrunch.com by Kyle Wiggers

Nvidia reveals new A.I. chip, says costs of running LLMs will ‘drop significantly’ — from cnbc.com by Kif Leswing

KEY POINTS

  • On Tuesday, Nvidia announced a new chip designed to run artificial intelligence models.
  • Nvidia’s GH200 has the same GPU as the H100, Nvidia’s current highest-end AI chip, but pairs it with 141 gigabytes of cutting-edge memory, as well as a 72-core ARM central processor.
  • “This processor is designed for the scale-out of the world’s data centers,” Nvidia CEO Jensen Huang said Tuesday.

Nvidia Has A Monopoly On AI Chips … And It’s Only Growing — from theneurondaily.com by The Neuron

In layman’s terms: Nvidia is on fire, and they’re only turning up the heat.


AI-Powered War Machines: The Future of Warfare Is Here — from readwrite.com by Deanna Ritchie

The advancement of robotics and artificial intelligence (AI) has paved the way for a new era in warfare. Gone are the days of manned ships and traditional naval operations. Instead, the US Navy’s Task Force 59 is at the forefront of integrating AI and robotics into naval operations. With a fleet of autonomous robot ships, the Navy aims to revolutionize the way wars are fought at sea.

From DSC:
Crap. Ouch. Some things don’t seem to ever change. Few are surprised by this development…but still, this is a mess.


Sam Altman is already nervous about what AI might do in elections — from qz.com by Faustine Ngila; via Sam DeBrule
The OpenAI chief warned about the power of AI-generated media to potentially influence the vote

Altman, who has become the face of the recent hype cycle in AI development, feels that humans could be persuaded politically through conversations with chatbots or fooled by AI-generated media.


Your guide to AI: August 2023 — from nathanbenaich.substack.com by Nathan Benaich

Welcome to the latest issue of your guide to AI, an editorialized newsletter covering key developments in AI policy, research, industry, and startups. This special summer edition (while we’re producing the State of AI Report 2023!) covers our 7th annual Research and Applied AI Summit that we held in London on 23 June.

Below are some of our key takeaways from the event and all the talk videos can be found on the RAAIS YouTube channel here. If this piques your interest to join next year’s event, drop your details here.


Why generative AI is a game-changer for customer service workflows — from venturebeat.com via Superhuman

Gen AI, however, eliminates the lengthy search. It can parse a natural language query, synthesize the necessary information and serve up the answers the agent is looking for in a neatly summarized response, slashing call times dramatically.
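
The workflow described here (parse the question, pull the relevant information, return a summarized answer) is essentially retrieval plus summarization. Below is a hypothetical sketch with the knowledge-base lookup stubbed out; the model name and retrieval step are assumptions, not any vendor’s actual product.

```python
# A hypothetical sketch of the workflow described above: take the customer's
# natural-language question, retrieve relevant help-desk articles (stubbed out
# here), and have an LLM return a short, summarized answer for the agent.
from openai import OpenAI

client = OpenAI()

def retrieve_articles(query: str) -> list[str]:
    # Placeholder: a real system would query a search index or vector store.
    return [
        "Refunds are issued to the original payment method within 5-7 business days.",
        "Orders can be cancelled free of charge before they ship.",
    ]

def answer_for_agent(question: str) -> str:
    context = "\n".join(retrieve_articles(question))
    resp = client.chat.completions.create(
        model="gpt-4",  # assumed model name
        messages=[
            {"role": "system", "content": "Answer using only the provided articles. Be brief."},
            {"role": "user", "content": f"Articles:\n{context}\n\nCustomer question: {question}"},
        ],
    )
    return resp.choices[0].message.content

print(answer_for_agent("How long does a refund take?"))
```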

BUT ALSO

Sam Altman: “AI Will Replace Customer Service Jobs First” — from theneurondaily.com

Excerpt:

Not only do its AI voices sound exactly like a human, but they can sound exactly like YOU.  All it takes is 6 (six!) seconds of your voice, and voila: it can replicate you saying any sentence in any tone, be it happy, sad, or angry.

The use cases are endless, but here are two immediate ones:

  1. Hyperpersonalized content.
    Imagine your favorite Netflix show but with every person hearing a slightly different script.
  2. Customer support agents. 
    We’re talking about ones that are actually helpful, a far cry from the norm!


AI has a Usability Problem — from news.theaiexchange.com
Why ChatGPT usage may actually be declining; using AI to become a spreadsheet pro

If you’re reading this and are using ChatGPT on a daily basis, congrats – you’re likely in the top couple of %.

For everyone else – AI still has a major usability problem.

From DSC:
Agreed.



From the ‘godfathers of AI’ to newer people in the field: Here are 16 people you should know — and what they say about the possibilities and dangers of the technology. — from businessinsider.com by Lakshmi Varanasi


 

InstructureCon 23 Conference Notes — from onedtech.beehiiv.com by Phil Hill

The company is increasingly emphasizing its portfolio of products built around the Canvas LMS, what they call the Instructure Unified Learning Platform. Perhaps the strongest change in message is the increased emphasis on the EdTech Collective, Instructure’s partner ecosystem. In fact, two of the three conference press releases were on the ecosystem – describing the 850 partners as “a larger partner community than any other LMS provider” and announcing a partnership with Khan Academy with its Khanmigo AI-based tutoring and teaching assistant tool (more on generative AI approach below).

Anthology Together 23 Conference Notes — from philhillaa.com by Glenda Morgan

The Anthology conference, held from July 17-19, marked the second gathering since Blackboard ceased operating as a standalone company and transformed into a brand for a product line.

What stood out was not just the number of added features but the extent to which these enhancements were driven by customer input. There has been a noticeable shift in how Anthology listens to clients, which had been a historical weakness for Blackboard. This positive change was emphasized not only by Anthology executives, but more importantly by customers themselves, even during unscripted side conversations.

D2L Fusion 23 Conference Notes — from onedtech.beehiiv.com

D2L is a slow burn company, and over the past eight years that has been a good thing. The company started working on its move to the cloud, tied to its user experience redesign as Brightspace, in 2014. Five years later, the company’s LMS was essentially all cloud (with one or two client exceptions). More importantly, D2L Brightspace in this time period became fully competitive with Instructure Canvas, winning head-to-head competitions not just due to specialized features but more broadly in terms of general system usability and intuitive design. That multi-year transformation is significant, particularly for a founder-led company.

 

 