ChatGPT is Everywhere — from chronicle.com by Beth McMurtrie
Love it or hate it, academics can’t ignore the already pervasive technology.

Excerpt:

Many academics see these tools as a danger to authentic learning, fearing that students will take shortcuts to avoid the difficulty of coming up with original ideas, organizing their thoughts, or demonstrating their knowledge. Ask ChatGPT to write a few paragraphs, for example, on how Jean Piaget’s theories on childhood development apply to our age of anxiety and it can do that.

Other professors are enthusiastic, or at least intrigued, by the possibility of incorporating generative AI into academic life. Those same tools can help students — and professors — brainstorm, kick-start an essay, explain a confusing idea, and smooth out awkward first drafts. Equally important, these faculty members argue, is their responsibility to prepare students for a world in which these technologies will be incorporated into everyday life, helping to produce everything from a professional email to a legal contract.

“Artificial-intelligence tools present the greatest creative disruption to learning that we’ve seen in my lifetime.”

Sarah Eaton, associate professor of education at the University of Calgary



Artificial intelligence and academic integrity, post-plagiarism — from universityworldnews.com by Sarah Elaine Eaton; with thanks to Robert Gibson out on LinkedIn for the resource

Excerpt:

The use of artificial intelligence tools does not automatically constitute academic dishonesty. It depends how the tools are used. For example, apps such as ChatGPT can be used to help reluctant writers generate a rough draft that they can then revise and update.

Used in this way, the technology can help students learn. The text can also be used to help students learn the skills of fact-checking and critical thinking, since the outputs from ChatGPT often contain factual errors.

When students use tools or other people to complete homework on their behalf, that is considered a form of academic dishonesty because the students are no longer learning the material themselves. The key point is that it is the students, and not the technology, who are to blame when students choose to have someone – or something – do their homework for them.

There is a difference between using technology to help students learn and using it to help them cheat. The same technology can be used for both purposes.

From DSC:
These couple of sentences…

In the age of post-plagiarism, humans use artificial intelligence apps to enhance and elevate creative outputs as a normal part of everyday life. We will soon be unable to detect where the human written text ends and where the robot writing begins, as the outputs of both become intertwined and indistinguishable.

…reminded me of what’s been happening within the filmmaking world for years (i.e., in films such as Star Wars, Jurassic Park, and many others). It’s often hard to tell what’s real and what’s been generated by a computer.
 

You are not a parrot — from nymag.com by Elizabeth Weil and Emily M. Bender

You Are Not a Parrot. And a chatbot is not a human. And a linguist named Emily M. Bender is very worried what will happen when we forget this.

Excerpts:

A handful of companies control what PricewaterhouseCoopers called a “$15.7 trillion game changer of an industry.” Those companies employ or finance the work of a huge chunk of the academics who understand how to make LLMs. This leaves few people with the expertise and authority to say, “Wait, why are these companies blurring the distinction between what is human and what’s a language model? Is this what we want?”

Bender knows she’s no match for a trillion-dollar game changer slouching to life. But she’s out there trying. Others are trying too. LLMs are tools made by specific people — people who stand to accumulate huge amounts of money and power, people enamored with the idea of the singularity. The project threatens to blow up what is human in a species sense. But it’s not about humility. It’s not about all of us. It’s not about becoming a humble creation among the world’s others. It’s about some of us — let’s be honest — becoming a superspecies. This is the darkness that awaits when we lose a firm boundary around the idea that humans, all of us, are equally worthy as is.

 

Introducing Q-Chat, the world’s first AI tutor built with OpenAI’s ChatGPT — from quizlet.com by Lex Bayer

Excerpt:

Modeled on research demonstrating that the most effective form of learning is one-on-one tutoring, Q-Chat offers students the experience of interacting with a personal AI tutor in an effective and conversational way. Whether they’re learning French vocabulary or Roman history, Q-Chat engages students with adaptive questions based on relevant study materials delivered through a fun chat experience. Pulling from Quizlet’s massive educational content library and using the question-based Socratic method to promote active learning, Q-Chat has the ability to test a student’s knowledge of educational content, ask in-depth questions to get at underlying concepts, test reading comprehension, help students learn a language and encourage students on healthy learning habits.

Quizlet's Q-Chat -- choose a study prompt to be quizzed on the material, to deepen your understanding or to learn through a story.

 

How ChatGPT is going to change the future of work and our approach to education — from livemint.com

From DSC: 
I thought that the article made a good point when it asserted:

The pace of technological advancement is booming aggressively and conversations around ChatGPT snatching away jobs are becoming more and more frequent. The future of work is definitely going to change and that makes it clear that the approach toward education is also demanding a big shift.

A report from Dell suggests that 85% of jobs that will be around in 2030 do not exist yet. The fact becomes important as it showcases that the jobs are not going to vanish, they will just change and most of the jobs by 2030 will be new.

The Future of Human Agency — from pewresearch.org by Janna Anderson and Lee Rainie

Excerpt:

Thus the question: What is the future of human agency? Pew Research Center and Elon University’s Imagining the Internet Center asked experts to share their insights on this; 540 technology innovators, developers, business and policy leaders, researchers, academics and activists responded. Specifically, they were asked:

By 2035, will smart machines, bots and systems powered by artificial intelligence be designed to allow humans to easily be in control of most tech-aided decision-making that is relevant to their lives?

The results of this nonscientific canvassing:

    • 56% of these experts agreed with the statement that by 2035 smart machines, bots and systems will not be designed to allow humans to easily be in control of most tech-aided decision-making.
    • 44% said they agreed with the statement that by 2035 smart machines, bots and systems will be designed to allow humans to easily be in control of most tech-aided decision-making.

What are the things humans really want agency over? When will they be comfortable turning to AI to help them make decisions? And under what circumstances will they be willing to outsource decisions altogether to digital systems?

The next big threat to AI might already be lurking on the web — from zdnet.com by Danny Palmer; via Sam DeBrule
Artificial intelligence experts warn attacks against datasets used to train machine-learning tools are worryingly cheap and could have major consequences.

Excerpts:

Data poisoning occurs when attackers tamper with the training data used to create deep-learning models. This action means it’s possible to affect the decisions that the AI makes in a way that is hard to track.

By secretly altering the source information used to train machine-learning algorithms, data-poisoning attacks have the potential to be extremely powerful because the AI will be learning from incorrect data and could make ‘wrong’ decisions that have significant consequences.
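To make the idea concrete, here is a toy sketch of a label-flipping attack (my own illustration, not from the article): a trivial nearest-centroid classifier learns a decision threshold from one-dimensional training data, and quietly relabeling just two training points shifts that threshold, changing how borderline inputs are classified.

```python
# Toy illustration of data poisoning via label flipping: a 1-D
# "nearest centroid" classifier learns a threshold halfway between
# the two class means, and flipping a few training labels moves it.

def centroid_threshold(samples):
    """Learn a decision threshold halfway between the class means."""
    xs0 = [x for x, y in samples if y == 0]
    xs1 = [x for x, y in samples if y == 1]
    m0 = sum(xs0) / len(xs0)
    m1 = sum(xs1) / len(xs1)
    return (m0 + m1) / 2

# Clean training data: class 0 clusters near 0, class 1 near 10.
clean = [(x, 0) for x in (0, 1, 2, 3)] + [(x, 1) for x in (9, 10, 11, 12)]

# Poisoned copy: an attacker relabels two class-0 points as class 1.
poisoned = [(x, 1 if x in (2, 3) else y) for x, y in clean]

print(centroid_threshold(clean))     # → 6.0
print(centroid_threshold(poisoned))  # shifted below 6.0, so borderline
                                     # inputs now get the "wrong" class
```

The shift is invisible unless you inspect the training data itself, which is exactly why such attacks are hard to track.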

Why AI Won’t Cause Unemployment — from pmarca.substack.com by Marc Andreessen

Excerpt:

Normally I would make the standard arguments against technologically-driven unemployment — see good summaries by Henry Hazlitt (chapter 7) and Frédéric Bastiat (his metaphor directly relevant to AI). And I will come back and make those arguments soon. But I don’t even think the standard arguments are needed, since another problem will block the progress of AI across most of the economy first.

Which is: AI is already illegal for most of the economy, and will be for virtually all of the economy.

How do I know that? Because technology is already illegal in most of the economy, and that is becoming steadily more true over time.

How do I know that? Because:


From DSC:
And for me, it boils down to an inconvenient truth: What’s the state of our hearts and minds?

AI, ChatGPT, Large Language Models (LLMs), and the like are tools. How we use such tools depends on what’s going on in our hearts and minds. A fork can be used to eat food. It can also be used as a weapon. I don’t mean to be so blunt, but I can’t think of another way to say it right now.

  • Do we care about one another…really?
  • Has capitalism gone astray?
  • Have our hearts, our thinking, and/or our mindsets gone astray?
  • Do the products we create help or hurt others? It seems like too many times our perspective is, “We will sell whatever they will buy, regardless of its impact on others — as long as it makes us money and gives us the standard of living that we want.” Perhaps we could poll some former executives from Philip Morris on this topic.
  • Or do we develop this new technology simply because we can? Who gives a rat’s tail about the ramifications?

 

Meet CoCounsel — “the world’s first AI legal assistant” — from casetext.com

Excerpt:

As we shared in our official press release, we’ve been collaborating with OpenAI to build CoCounsel on their latest, most advanced large language model. It was a natural fit between our two teams. OpenAI, the world leader in generative AI, selected Casetext to create a product powered by its technology that was suitable for professional use by lawyers. Our experience leading legal tech since 2013 and applying large language models to the law for over five years made us an ideal choice.


From DSC:
I look forward to seeing more vendors and products getting into the legaltech space — ones that use AI and other technologies to make significant progress on the access to justice issues that we have here in the United States.

 


Speaking of AI-related items, also see:

OpenAI debuts Whisper API for speech-to-text transcription and translation — from techcrunch.com by Kyle Wiggers

Excerpt:

To coincide with the rollout of the ChatGPT API, OpenAI today launched the Whisper API, a hosted version of the open source Whisper speech-to-text model that the company released in September.

Priced at $0.006 per minute, Whisper is an automatic speech recognition system that OpenAI claims enables “robust” transcription in multiple languages as well as translation from those languages into English. It takes files in a variety of formats, including M4A, MP3, MP4, MPEG, MPGA, WAV and WEBM.
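At that per-minute price, estimating transcription cost is simple arithmetic. A minimal sketch (my own helper function, not part of the OpenAI API; the rounding policy is an assumption):

```python
# Estimate Whisper API transcription cost at the quoted rate of
# $0.006 per minute of audio.

WHISPER_PRICE_PER_MINUTE = 0.006  # USD, as quoted in the article

def whisper_cost_usd(audio_seconds: float) -> float:
    """Estimated cost to transcribe a clip of the given length."""
    return round(audio_seconds / 60 * WHISPER_PRICE_PER_MINUTE, 4)

print(whisper_cost_usd(3600))  # a one-hour lecture → 0.36 (about 36 cents)
```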

Introducing ChatGPT and Whisper APIs — from openai.com
Developers can now integrate ChatGPT and Whisper models into their apps and products through our API.

Excerpt:

ChatGPT and Whisper models are now available on our API, giving developers access to cutting-edge language (not just chat!) and speech-to-text capabilities.



Everything you wanted to know about AI – but were afraid to ask — from theguardian.com by Dan Milmo and Alex Hern
From chatbots to deepfakes, here is the lowdown on the current state of artificial intelligence

Excerpt:

Barely a day goes by without some new story about AI, or artificial intelligence. The excitement about it is palpable – the possibilities, some say, are endless. Fears about it are spreading fast, too.

There can be much assumed knowledge and understanding about AI, which can be bewildering for people who have not followed every twist and turn of the debate.

So, the Guardian’s technology editors, Dan Milmo and Alex Hern, are going back to basics – answering the questions that millions of readers may have been too afraid to ask.


Nvidia CEO: “We’re going to accelerate AI by another million times” — from
In a recent earnings call, the boss of Nvidia Corporation, Jensen Huang, outlined his company’s achievements over the last 10 years and predicted what might be possible in the next decade.

Excerpt:

Fast forward to today, and CEO Jensen Huang is optimistic that the recent momentum in AI can be sustained into at least the next decade. During the company’s latest earnings call, he explained that Nvidia’s GPUs had boosted AI processing by a factor of one million in the last 10 years.

“Moore’s Law, in its best days, would have delivered 100x in a decade. By coming up with new processors, new systems, new interconnects, new frameworks and algorithms and working with data scientists, AI researchers on new models – across that entire span – we’ve made large language model processing a million times faster,” Huang said.
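As a back-of-the-envelope check on those figures (my own arithmetic, not the article’s): compounding to 1,000,000x over ten years implies roughly a 4x gain per year, versus roughly 1.6x per year for Moore’s Law’s 100x per decade.

```python
# What constant annual growth rate compounds to a given total gain
# over a number of years?

def annual_factor(total_gain: float, years: int = 10) -> float:
    """Per-year multiplier that compounds to total_gain over `years`."""
    return total_gain ** (1 / years)

ai_rate = annual_factor(1_000_000)  # ≈ 3.98x per year (Huang's claim)
moore_rate = annual_factor(100)     # ≈ 1.58x per year (Moore's Law)
print(round(ai_rate, 2), round(moore_rate, 2))
```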

From DSC:
NVIDIA is the inventor of the Graphics Processing Unit (GPU), which creates interactive graphics on laptops, workstations, mobile devices, notebooks, PCs, and more. The company is a dominant supplier of artificial intelligence hardware and software.


 


9 ways ChatGPT will help CIOs — from enterprisersproject.com by Katie Sanders
What are the potential benefits of this popular tool? Experts share how it can help CIOs be more efficient and bring competitive differentiation to their organizations.

Excerpt:

Don’t assume this new technology will replace your job. As Mark Lambert, a senior consultant at netlogx, says, “CIOs shouldn’t view ChatGPT as a replacement for humans but as a new and exciting tool that their IT teams can utilize. From troubleshooting IT issues to creating content for the company’s knowledge base, artificial intelligence can help teams operate more efficiently and effectively.”



Would you let ChatGPT control your smart home? — from theverge.com by

While the promise of an inherently competent, eminently intuitive voice assistant — a flawless butler for your home — is very appealing, I fear the reality could be more Space Odyssey than Downton Abbey. But let’s see if I’m proven wrong.


How ChatGPT Is Being Used To Enhance VR Training — from vrscout.com by Kyle Melnick

Excerpt:

The company claims that its VR training program can be used to prepare users for a wide variety of challenging scenarios, whether you’re a recent college graduate preparing for a difficult job interview or a manager simulating a particularly tough performance review. Users can customize their experiences depending on their role and receive real-time feedback based on their interactions with the AI.


From DSC:
Below are some example topics/articles involving healthcare and AI. 


Role of AI in Healthcare — from doctorsexplain.media
The role of Artificial Intelligence (AI) in healthcare is becoming increasingly important as technology advances. AI has the potential to revolutionize the healthcare industry, from diagnosis and treatment to patient care and management. AI can help healthcare providers make more accurate diagnoses, reduce costs, and improve patient outcomes.

60% of patients uncomfortable with AI in healthcare settings, survey finds — from healthcaredive.com by Hailey Mensik

Dive Brief:

  • About six in 10 U.S. adults said they would feel uncomfortable if their provider used artificial intelligence tools to diagnose them and recommend treatments in a care setting, according to a survey from the Pew Research Center.
  • Some 38% of respondents said using AI in healthcare settings would lead to better health outcomes while 33% said it would make them worse, and 27% said it wouldn’t make much of a difference, the survey found.
  • Ultimately, men, younger people and those with higher education levels were the most open to their providers using AI.

The Rise of the Superclinician – How Voice AI Can Improve the Employee Experience in Healthcare — from medcitynews.com by Tomer Garzberg
Voice AI is the new frontier in healthcare. With its constantly evolving landscape, the healthcare […]

Excerpt:

Voice AI can generate up to 30% higher clinician productivity by automating these healthcare use cases:

  • Updating records
  • Provider duress
  • Platform orchestration
  • Shift management
  • Client data handoff
  • Home healthcare
  • Maintenance
  • Equipment ordering
  • Meal preferences
  • Case data queries
  • Patient schedules
  • Symptom logging
  • Treatment room setup
  • Patient condition education
  • Patient support recommendations
  • Medication advice
  • Incident management
  • … and many more

ChatGPT is poised to upend medical information. For better and worse. — from usatoday.com by Karen Weintraub

Excerpt:

But – and it’s a big “but” – the information these digital assistants provide might be more inaccurate and misleading than basic internet searches.

“I see no potential for it in medicine,” said Emily Bender, a linguistics professor at the University of Washington. By their very design, these large-language technologies are inappropriate sources of medical information, she said.

Others argue that large language models could supplement, though not replace, primary care.

“A human in the loop is still very much needed,” said Katie Link, a machine learning engineer at Hugging Face, a company that develops collaborative machine learning tools.

Link, who specializes in health care and biomedicine, thinks chatbots will be useful in medicine someday, but the technology isn’t yet ready.

 

Planning for AGI and beyond — from OpenAI.org by Sam Altman

Excerpt:

There are several things we think are important to do now to prepare for AGI.

First, as we create successively more powerful systems, we want to deploy them and gain experience with operating them in the real world. We believe this is the best way to carefully steward AGI into existence—a gradual transition to a world with AGI is better than a sudden one. We expect powerful AI to make the rate of progress in the world much faster, and we think it’s better to adjust to this incrementally.

A gradual transition gives people, policymakers, and institutions time to understand what’s happening, personally experience the benefits and downsides of these systems, adapt our economy, and to put regulation in place. It also allows for society and AI to co-evolve, and for people collectively to figure out what they want while the stakes are relatively low.

*AGI stands for Artificial General Intelligence

 

What ChatGPT And Generative AI Mean For Your Business? — from forbes.com by Gil Press [behind a paywall]

Excerpt:

Challenges abound with deploying AI in general but when it comes to generative AI, businesses face a “labyrinth of problems,” according to Forrester: Generating coherent nonsense; recreating biases; vulnerability to new security challenges and attacks; trust, reliability, copyright and intellectual property issues. “Any fair discussion of the value of adopting generative AI,” says Forrester, “must acknowledge its considerable costs. Training and re-training models takes time and money, and the GPUs required to run these workloads remain expensive.”

As is always the case with the latest and greatest enterprise technologies, tools and techniques, the answer to “what’s to be done?” boils down to one word: Learn. Study what your peers have been doing in recent years with generic AI. A good starting point is the just-published All-in On AI: How Smart Companies Win Big with Artificial Intelligence.

Also relevant/see:

Generative AI is here, along with critical legal implications — from venturebeat.com by Nathaniel Bach, Eric Bergner, and Andrea Del-Carmen Gonzalez

Excerpt:

With that promise comes a number of legal implications. For example, what rights and permissions are implicated when a GAI user creates an expressive work based on inputs involving a celebrity’s name, a brand, artwork, and potentially obscene, defamatory or harassing material? What might the creator do with such a work, and how might such use impact the creator’s own legal rights and the rights of others?

This article considers questions like these and the existing legal frameworks relevant to GAI stakeholders.

 

AI starter tools for video content creation — from techthatmatters.beehiiv.com by Harsh Makadia

Excerpt:

One of the most exciting applications of AI is in the realm of content creation. What if I told you there are tools to generate videos in mins?

Try these tools today:

  • Supercreator AI: Create short form videos 10x faster
  • Lumen5: Automatically turn blog posts into videos
  • InVideo: Idea to YouTube video
  • Synthesia: Create videos from plain text in minutes
  • Narakeet: Get a professionally sounding audio or video in minutes
  • Movio: Create engaging video content
 

A quick and sobering guide to cloning yourself — from oneusefulthing.substack.com by Professor Ethan Mollick
It took me a few minutes to create a fake me giving a fake lecture.

Excerpt:

I think a lot of people do not realize how rapidly the multiple strands of generative AI (audio, text, images, and video) are advancing, and what that means for the future.

With just a photograph and 60 seconds of audio, you can now create a deepfake of yourself in just a matter of minutes by combining a few cheap AI tools. I’ve tried it myself, and the results are mind-blowing, even if they’re not completely convincing. Just a few months ago, this was impossible. Now, it’s a reality.

To start, you should probably watch the short video of Virtual Me and Real Me giving the same talk about entrepreneurship. Nothing about the Virtual Me part of the video is real, even the script was completely AI-generated.



From DSC:
Also, I wanted to post the resource below just because I think it’s an excellent question!

If ChatGPT Can Disrupt Google In 2023, What About Your Company? — from forbes.com by Glenn Gow

Excerpts:

Board members and corporate execs don’t need AI to decode the lessons to be learned from this. The lessons should be loud and clear: If even the mighty Google can be potentially overthrown by AI disruption, you should be concerned about what this may mean for your company.

Professions that will be disrupted by generative AI include marketing, copywriting, illustration and design, sales, customer support, software coding, video editing, film-making, 3D modeling, architecture, engineering, gaming, music production, legal contracts, and even scientific research. Software applications will soon emerge that will make it easy and intuitive for anyone to use generative AI for those fields and more.


 

ChatGPT sets record for fastest-growing user base – analyst note — from reuters.com by Krystal Hu

Excerpt (emphasis DSC):

Feb 1 (Reuters) – ChatGPT, the popular chatbot from OpenAI, is estimated to have reached 100 million monthly active users in January, just two months after launch, making it the fastest-growing consumer application in history, according to a UBS study on Wednesday.

The report, citing data from analytics firm Similarweb, said an average of about 13 million unique visitors had used ChatGPT per day in January, more than double the levels of December.

“In 20 years following the internet space, we cannot recall a faster ramp in a consumer internet app,” UBS analysts wrote in the note.


From DSC:
This reminds me of the current exponential pace of change that we are experiencing…

…and how we struggle with that kind of pace.

 

Five Predictions for the Future of Learning in the Age of AI — from a16z.com by Anne Lee Skates

Excerpts:

Seeing as education is one of AI’s first consumer use cases, and programs like ChatGPT are how millions of kids, teachers, and administrators will be introduced to AI, it is critical that we pay attention to the applications of AI and its implications for our lives. Below, we explore five predictions for AI and the future of learning, knowledge, and education.


 

 

Radar Trends to Watch: February 2023 — from oreilly.com by Mike Loukides
Developments in Data, Programming, Security, and More

Excerpt:

One application for ChatGPT is writing documentation for developers, and providing a conversational search engine for the documentation and code. Writing internal documentation is an often omitted part of any software project.

DoNotPay has developed an AI “lawyer” that is helping a defendant make arguments in court. The lawyer runs on a cell phone, through which it hears the proceedings. It tells the defendant what to say through Bluetooth earbuds. DoNotPay’s CEO notes that this is illegal in almost all courtrooms. (After receiving threats from bar associations, DoNotPay has abandoned this trial.)

Matter, a standard for smart home connectivity, appears to be gaining momentum. Among other things, it allows devices to interact with a common controller, rather than an app (and possibly a hub) for each device.

 

Introducing: ChatGPT Edu-Mega-Prompts — from drphilippahardman.substack.com by Dr. Philippa Hardman; with thanks to Ray Schroeder out on LinkedIn for this resource
How to combine the power of AI + learning science to improve your efficiency & effectiveness as an educator

From DSC:
Before relaying some excerpts, I want to say that I get the gist of what Dr. Hardman is saying re: quizzes. But I’m surprised to hear she had so many pedagogical concerns with quizzes. I, too, would like to see quizzes used as an instrument of learning and to practice recall — and not just for assessment. But I would give quizzes a higher thumbs up than what she did. I think she was also trying to say that quizzes don’t always identify misconceptions or inaccurate foundational information. 

Excerpts:

The Bad News: Most AI technologies that have been built specifically for educators in the last few years and months imitate and threaten to spread the use of broken instructional practices (i.e. content + quiz).

The Good News: Armed with prompts which are carefully crafted to ask the right thing in the right way, educators can use AI like GPT3 to improve the effectiveness of their instructional practices.

As is always the case, ChatGPT is your assistant. If you’re not happy with the result, you can edit and refine it using your expertise, either alone or through further conversation with ChatGPT.

For example, once the first response is generated, you can ask ChatGPT to make the activity more or less complex, to change the scenario and/or suggest more or different resources – the options are endless.

Philippa recommended checking out Rob Lennon’s streams of content. Here’s an example from his Twitter account:


Also relevant/see:


3 Trends That May Unlock AI’s Potential for L&D in 2023 — from learningguild.com by Juan Naranjo

Excerpts:

AI-assisted design and development work
This is the trend most likely to have a dramatic evolution this year.

Solutions like large language models, speech generators, content generators, image generators, translation tools, transcription tools, and video generators, among many others, will transform the way IDs create the learning experiences our organizations use. Two examples are:

1. IDs will be doing more curation and less creation:

  • Many IDs will start pulling raw material from content generators (built using natural language processing platforms like OpenAI’s GPT-3, Microsoft’s LUIS, IBM’s Watson, Google’s BERT, etc.) to obtain ideas and drafts that they can then clean up and add to the assets they are assembling. As technology advances, the output from these platforms will be more suitable to become final drafts, and the curation and clean-up tasks will be faster and easier.
  • Then, the designer can leverage a solution like DALL-E 2 (or a product developed based on it) to obtain visuals that can then be modified (or not) with programs like Illustrator or Photoshop (see the image below for DALL-E’s “Cubist interpretation of AI and brain science”).

2. IDs will spend less, and in some cases no time at all, creating learning pathways

AI engines contained in LXPs and other platforms will select the right courses for employees and guide these learners from their current level of knowledge and skill to their goal state with substantially less human intervention.

 


The Creator of ChatGPT Thinks AI Should Be Regulated — from time.com by John Simons

Excerpts:

Somehow, Mira Murati can forthrightly discuss the dangers of AI while making you feel like it’s all going to be OK.

A growing number of leaders in the field are warning of the dangers of AI. Do you have any misgivings about the technology?

This is a unique moment in time where we do have agency in how it shapes society. And it goes both ways: the technology shapes us and we shape it. There are a lot of hard problems to figure out. How do you get the model to do the thing that you want it to do, and how you make sure it’s aligned with human intention and ultimately in service of humanity? There are also a ton of questions around societal impact, and there are a lot of ethical and philosophical questions that we need to consider. And it’s important that we bring in different voices, like philosophers, social scientists, artists, and people from the humanities.


Whispers of A.I.’s Modular Future — from newyorker.com by James Somers; via Sam DeBrule

Excerpts:

Gerganov adapted it from a program called Whisper, released in September by OpenAI, the same organization behind ChatGPT and DALL-E. Whisper transcribes speech in more than ninety languages. In some of them, the software is capable of superhuman performance—that is, it can actually parse what somebody’s saying better than a human can.

Until recently, world-beating A.I.s like Whisper were the exclusive province of the big tech firms that developed them.

Ever since I’ve had tape to type up—lectures to transcribe, interviews to write down—I’ve dreamed of a program that would do it for me. The transcription process took so long, requiring so many small rewindings, that my hands and back would cramp. As a journalist, knowing what awaited me probably warped my reporting: instead of meeting someone in person with a tape recorder, it often seemed easier just to talk on the phone, typing up the good parts in the moment.

From DSC:
Journalism majors — and even seasoned journalists — should keep an eye on this type of application, as it will save them a significant amount of time and/or money.

Microsoft Teams Premium: Cut costs and add AI-powered productivity — from microsoft.com by Nicole Herskowitz

Excerpt:

Built on the familiar, all-in-one collaborative experience of Microsoft Teams, Teams Premium brings the latest technologies, including Large Language Models powered by OpenAI’s GPT-3.5, to make meetings more intelligent, personalized, and protected—whether it’s one-on-one, large meetings, virtual appointments, or webinars.


 
© 2025 | Daniel Christian