ChatGPT: 30 incredible ways to use the AI-powered chatbot — from interestingengineering.com by Christopher McFadden You’ve heard of ChatGPT, but do you know how to use it? Or what to use it for? If not, then here are some ideas to get you started.
Excerpts:
It’s great at writing CVs and resumes
It can also read and improve an existing CV or resume
There are obvious questions like “Are the AI’s algorithms good enough?” (probably not yet) and “What will happen to Google?” (nobody knows), but I’d like to take a step back and ask some more fundamental questions: why chat? And why now?
Most people don’t realize that the AI model powering ChatGPT is not all that new. It’s a tweaked version of a foundation model, GPT-3, that launched in June 2020. Many people have built chatbots using it before now. OpenAI even has a guide in its documentation showing exactly how you can use its APIs to make one.
So what happened? The simple narrative is that AI got exponentially more powerful recently, so now a lot of people want to use it. That’s true if you zoom out. But if you zoom in, you start to see that something much more complex and interesting is happening.
This leads me to a surprising hypothesis: perhaps the ChatGPT moment never would have happened without DALL-E 2 and Stable Diffusion happening earlier in the year!
Like writing and coding before it, prompt engineering is an emergent form of thinking. It lies somewhere between conversation and query, between programming and prose. It is the one part of this fast-changing, uncertain future that feels distinctly human.
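On the earlier point that OpenAI's documentation already showed how to build a chatbot on its APIs: a working toy really did take only a few dozen lines against the completions API of that era. Here is a minimal, hypothetical sketch; the model name, prompt format, and conversation handling are my assumptions rather than OpenAI's documented example, and it requires the openai Python package plus your own API key.

```python
# Minimal, hypothetical GPT-3 chat loop (assumes `pip install openai` and an
# OPENAI_API_KEY environment variable; the model name is an assumption).
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

history = "The following is a conversation with a helpful, concise assistant.\n"

while True:
    user_msg = input("You: ")
    if user_msg.strip().lower() in {"quit", "exit"}:
        break
    history += f"Human: {user_msg}\nAssistant:"
    response = openai.Completion.create(
        model="text-davinci-003",   # assumed model; substitute whatever you have access to
        prompt=history,
        max_tokens=200,
        temperature=0.7,
        stop=["Human:"],            # stop before the model writes the next user turn
    )
    answer = response.choices[0].text.strip()
    print("Bot:", answer)
    history += f" {answer}\n"
```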
OpenAI’s ChatGPT, with new funding from Microsoft, has grown to over one million users faster than many of the dominant tech companies, apps and platforms of the past decade.
Unlike the metaverse concept, which had a hype cycle based on an idea still nebulous to many, generative AI as tech’s next big thing is being built on top of decades of existing machine learning already embedded in business processes.
We asked top technology officers, specifically reaching out to many at non-tech sector companies, to break down the potential and pitfalls of AI adoption.
The contract lifecycle management company Ironclad has tapped into the power of OpenAI’s GPT-3 to introduce AI Assist, a beta feature that instantly redlines contracts based on a company’s playbook of approved clauses and language.
The redlines, made using GPT-3’s generative artificial intelligence, appear as tracked changes in Microsoft Word, where a user can then scan the recommended changes and either accept or reject them.
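Ironclad has not published how AI Assist works under the hood, but the general pattern it describes (generate a playbook-compliant rewrite, then present the edits as tracked changes) can be sketched in a few lines. Everything below is an assumption for illustration: the model name, the prompt wording, and a word-level diff standing in for Word's tracked changes.

```python
# Hypothetical sketch of playbook-based redlining: ask a model to rewrite a
# clause to match approved playbook language, then render the edits as a
# word-level diff (a stand-in for tracked changes in Word).
import difflib
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

def redline(original_clause: str, playbook_guidance: str) -> str:
    prompt = (
        "Rewrite the contract clause below so it complies with the playbook "
        "guidance, changing as little wording as possible.\n\n"
        f"Playbook guidance: {playbook_guidance}\n\n"
        f"Clause: {original_clause}\n\nRevised clause:"
    )
    resp = openai.Completion.create(
        model="text-davinci-003",  # assumed model
        prompt=prompt,
        max_tokens=300,
        temperature=0.0,           # deterministic edits suit legal text
    )
    revised = resp.choices[0].text.strip()
    diff = difflib.ndiff(original_clause.split(), revised.split())
    return " ".join(
        f"[-{w[2:]}-]" if w.startswith("- ")
        else f"{{+{w[2:]}+}}" if w.startswith("+ ")
        else w[2:]
        for w in diff
        if not w.startswith("? ")
    )

print(redline(
    "Either party may terminate this agreement at any time without notice.",
    "Termination requires at least 30 days' written notice.",
))
```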
DAVOS, Switzerland—Microsoft Corp. plans to incorporate artificial-intelligence tools like ChatGPT into all of its products and make them available as platforms for other businesses to build on, Chief Executive Satya Nadella said.
It’s a matter of time before LMSs like Canvas and Anthology do the same. It’s really going to change the complexion of online learning.
Microsoft are holding a lot of great cards in the AI game, especially ChatGPT, but Google also have a great hand; in fact, they have a bird in the hand:
Sparrow, from DeepMind, is likely to launch soon. Their aim is to trump ChatGPT with a chatbot that is more useful and reduces the risk of unsafe and inappropriate answers. In the released paper, they also indicate that it will have moral constraints. Smart move.
Hassabis has promised some sort of release in 2023. Their goal is to reduce wrong and invented information by linking the chatbot to Google Search and Scholar for citations.
The Art of ChatGPT Prompting: A Guide to Crafting Clear and Effective Prompts.
This free e-book acts as a useful guide for beginners.
Collection of ChatGPT Resources Use ChatGPT in Google Docs, WhatsApp, as a desktop app, with your voice, or in other ways with this running list of tools.
Awesome ChatGPT prompts
Dozens of clever pre-written prompts you can use to initiate your own conversations with ChatGPT to get it to reply as a fallacy finder or a journal reviewer or whatever else.
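To give a flavor of the style (the wording below is a hypothetical example in the same spirit, not copied from the repository):

```python
# Illustrative "act as" prompt in the style of awesome-chatgpt-prompts
# (hypothetical wording; the repository's actual entries differ).
FALLACY_FINDER_PROMPT = (
    "I want you to act as a fallacy finder. I will give you short arguments, "
    "and you will identify any logical fallacies or unsupported claims, name "
    "each fallacy, and briefly explain why it applies. Reply only with your "
    "analysis. My first argument is: "
)

print(FALLACY_FINDER_PROMPT + '"Everyone is using this product, so it must be good."')
```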
Writing for Renegades – Co-writing with AI
This free 17-page resource has writing exercises you can try with ChatGPT. It also includes interesting nuggets, like Wycliffe A. Hill’s 1936 attempt at writing automation, Plot Genie.
We often see the battle between technology and humans as a zero-sum game. And that’s how much of the discussion about ChatGPT is being framed now. Like many others who have been experimenting with ChatGPT in recent weeks, I find that a lot of the output depends on the input. In other words, the better the human question, the better the ChatGPT answer.
So instead of seeing ourselves competing with technology, we should find ways to complement it and view ChatGPT as a tool that assists us in collecting information and in writing drafts.
If we reframe the threat, think about how much time could be freed up to read, to think, to write.
As many have noted, including Michael Horn on the Class Disrupted podcast he co-hosts, ChatGPT is to writing what calculators were once to math and other STEM disciplines.
GPT in Higher Education — from insidehighered.com by Ray Schroeder ChatGPT has caught our attention in higher education. What will it mean in 2023?
Excerpt:
Martin Dougiamas, founder and CEO of Moodle, writes in Open Ed Tech that, as educators, we must recognize that artificial general intelligence will become ubiquitous. “In short, we need to embrace that AI is going to be a huge part of our lives when creating anything. There is no gain in banning it or avoiding it. It’s actually easier (and better) to use this moment to restructure our education processes to be useful and appropriate in today’s environment (which is full of opportunities).”
Who, at your institution, is examining the impact of AI, and in particular GPT, upon the curriculum? Are instructional designers working with instructors in revising syllabi and embedding AI applications into the course offerings? What can you do to ensure that your university is preparing learners for the future rather than the past?
Ray Schroeder
ChatGPT Advice Academics Can Use Now — from insidehighered.com by Susan D’Agostino To harness the potential and avert the risks of OpenAI’s new chat bot, academics should think a few years out, invite students into the conversation and—most of all—experiment, not panic.
At schools including George Washington University in Washington, D.C., Rutgers University in New Brunswick, New Jersey, and Appalachian State University in Boone, North Carolina, professors are phasing out take-home, open-book assignments — which became a dominant method of assessment in the pandemic but now seem vulnerable to chatbots. They are instead opting for in-class assignments, handwritten papers, group work and oral exams.
Gone are prompts like “write five pages about this or that.” Some professors are instead crafting questions that they hope will be too clever for chatbots and asking students to write about their own lives and current events.
Why Banning ChatGPT in Class Is a Mistake — from campustechnology.com by Thomas Mennella Artificial intelligence can be a valuable learning tool, if used in the right context. Here are ways to embrace ChatGPT and encourage students to think critically about the content it produces.
Well, it was bound to happen. Anytime you have a phenomenon as disruptive as generative AI, you can expect lawsuits.
Case in point: the lawsuit recently filed by Getty Images against Stability AI, which highlights the ongoing legal challenges posed by the use of AI in the creative industries. And it’s not the only lawsuit recently filed; see, for example, “Now artists sue AI image generation tools Stable Diffusion, Midjourney over copyright” in The Indian Express.
Here’s the list of sources: https://t.co/fJd4rh8kLy. The larger resource area at https://t.co/bN7CReGIEC has sample ChatGPT essays, strategies for mitigating harm, and questions for teachers to ask as well as a listserv.
— Anna Mills, amills@mastodon.oeru.org, she/her (@EnglishOER) January 11, 2023
Microsoft is reportedly eyeing a $10 billion investment in OpenAI, the startup that created the viral chatbot ChatGPT, and is planning to integrate it into Office products and Bing search. The tech giant has already invested at least $1 billion into OpenAI. Some of these features might be rolling out as early as March, according to The Information.
This is a big deal. If successful, it will bring powerful AI tools to the masses. So what would ChatGPT-powered Microsoft products look like? We asked Microsoft and OpenAI. Neither was willing to answer our questions on how they plan to integrate AI-powered products into Microsoft’s tools, even though work must be well underway to do so. However, we do know enough to make some informed, intelligent guesses. Hint: it’s probably good news if, like me, you find creating PowerPoint presentations and answering emails boring.
I have maintained for several years, including in a book, ‘AI for Learning’, that AI is the technology of the age and will change everything. This is unfolding as we speak, but it is interesting to ask who the winners are likely to be.
People who have heard of GPT-3 / ChatGPT, and are vaguely following the advances in machine learning, large language models, and image generators. Also people who care about making the web a flourishing social and intellectual space.
That dark forest is about to expand. Large Language Models (LLMs) that can instantly generate coherent swaths of human-like text have just joined the party.
It is in this uncertain climate that Hassabis agrees to a rare interview, to issue a stark warning about his growing concerns. “I would advocate not moving fast and breaking things.”
…
“When it comes to very powerful technologies—and obviously AI is going to be one of the most powerful ever—we need to be careful,” he says. “Not everybody is thinking about those things. It’s like experimentalists, many of whom don’t realize they’re holding dangerous material.” Worse still, Hassabis points out, we are the guinea pigs.
Demis Hassabis
Excerpt (emphasis DSC):
Hassabis says these efforts are just the beginning. He and his colleagues have been working toward a much grander ambition: creating artificial general intelligence, or AGI, by building machines that can think, learn, and be set to solve humanity’s toughest problems. Today’s AI is narrow, brittle, and often not very intelligent at all. But AGI, Hassabis believes, will be an “epoch-defining” technology—like the harnessing of electricity—that will change the very fabric of human life. If he’s right, it could earn him a place in history that would relegate the namesakes of his meeting rooms to mere footnotes.
But with AI’s promise also comes peril. In recent months, researchers building an AI system to design new drugs revealed that their tool could be easily repurposed to make deadly new chemicals. A separate AI model trained to spew out toxic hate speech went viral, exemplifying the risk to vulnerable communities online. And inside AI labs around the world, policy experts were grappling with near-term questions like what to do when an AI has the potential to be commandeered by rogue states to mount widespread hacking campaigns or infer state-level nuclear secrets.
Headteachers and university lecturers have expressed concerns that ChatGPT, which can provide convincing human-sounding answers to exam questions, could spark a wave of cheating in homework and exam coursework.
Now, the bot’s makers, San Francisco-based OpenAI, are trying to counter the risk by “watermarking” the bot’s output and making plagiarism easier to spot.
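OpenAI has not said how its watermark would work, but one approach discussed in the research community is to nudge the generator toward a pseudo-random "green list" of tokens keyed to the preceding token; a detector then simply counts how many tokens land on the green list. A toy, word-level sketch of the detection side, offered purely as an illustration of the statistical idea:

```python
# Toy illustration of statistical watermark detection -- NOT OpenAI's actual
# (unpublished) scheme. If a generator secretly favored a pseudo-random "green
# list" of words keyed to the previous word, its output will contain noticeably
# more green words than the chance rate (gamma).
import hashlib

def is_green(prev_word: str, word: str, gamma: float = 0.5) -> bool:
    """Deterministically assign `word` to the green or red list, keyed on `prev_word`."""
    digest = hashlib.sha256(f"{prev_word}|{word}".encode()).hexdigest()
    return int(digest, 16) % 1_000_000 < gamma * 1_000_000

def green_rate(text: str) -> float:
    """Fraction of word pairs whose second word is green; roughly gamma for ordinary text."""
    words = text.lower().split()
    if len(words) < 2:
        return 0.0
    hits = sum(is_green(p, w) for p, w in zip(words, words[1:]))
    return hits / (len(words) - 1)

# Ordinary prose should score near 0.5; output from a generator that favored
# green words would score much higher, which a simple z-test could flag.
print(green_rate("The quick brown fox jumps over the lazy dog."))
```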
Students need now, more than ever, to understand how to navigate a world in which artificial intelligence is increasingly woven into everyday life. It’s a world that they, ultimately, will shape.
We hail from two professional fields that have an outsize interest in this debate. Joanne is a veteran journalist and editor deeply concerned about the potential for plagiarism and misinformation. Rebecca is a public health expert focused on artificial intelligence, who champions equitable adoption of new technologies.
We are also mother and daughter. Our dinner-table conversations have become a microcosm of the argument around ChatGPT, weighing its very real dangers against its equally real promise. Yet we both firmly believe that a blanket ban is a missed opportunity.
ChatGPT: Threat or Menace? — from insidehighered.com by Steven Mintz Are fears about generative AI warranted?
The rapid pace of change is driven by a “perfect storm” of factors, including the falling cost of computing power, the rise of data-driven decision-making, and the increasing availability of new technologies. “The speed of current breakthroughs has no historical precedent,” concluded Andrew Doxsey, co-founder of Libra Incentix, in an interview. “Unlike previous technological revolutions, the Fourth Industrial Revolution is evolving exponentially rather than linearly. Furthermore, it disrupts almost every industry worldwide.”
An updated version of the AI chatbot ChatGPT was recently released to the public.
I got the chatbot to write cover letters for real jobs and asked hiring managers what they thought.
The managers said they would’ve given me a call but that the letters lacked personality.
I mentor a young lad with poor literacy skills who is starting a landscaping business. He struggles to communicate with clients in a professional manner.
I created a GPT3-powered Gmail account to which he sends a message. It responds with the text to send to the client. pic.twitter.com/nlFX9Yx6wR
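The tweet does not share the implementation, but the pipeline it describes is straightforward in outline: take the rough incoming note, ask GPT-3 to rewrite it in a professional tone, and send back the polished text. A hypothetical sketch of the rewriting step (the model name and prompt are assumptions, and the Gmail plumbing is left out):

```python
# Hypothetical sketch of the "GPT-3 email polisher" described in the tweet.
# Only the rewriting step is shown; wiring it to Gmail is left out.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

def polish(rough_text: str) -> str:
    prompt = (
        "Rewrite the following note as a short, polite, professional message "
        "from a landscaping business to a client. Fix spelling and grammar and "
        "keep the original meaning.\n\n"
        f"Note: {rough_text}\n\nProfessional message:"
    )
    resp = openai.Completion.create(
        model="text-davinci-003",  # assumed model
        prompt=prompt,
        max_tokens=200,
        temperature=0.4,
    )
    return resp.choices[0].text.strip()

print(polish("cant do tuesday its gonna rain, can we do thursday insted? grass still needs doing"))
```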
Lawyers can ask GPT-3 to help write contracts in Microsoft Word thanks to legal tech startup Lexion’s new AI Contract Assist Word plugin. The new tool offers assistance in drafting and negotiating terms, as well as summarizing the contract for those not versed in legal language, and marks the growing interest in applying generative AI within the legal profession.
Lexion’s AI Contract Assistant is designed to compose, adjust, and explain contracts with an eye toward streamlining their creation and approval. Lawyers with the Word plugin can write a prompt describing the goal of a contract clause, and the AI will generate one with appropriate language.
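Lexion has not published its internals, but the interaction described here (a lawyer types the goal of a clause and the model drafts language for it) maps onto a very simple completion pattern. A minimal sketch under an assumed prompt wording and model name:

```python
# Minimal sketch of "describe the clause you want, get draft language back".
# A generic GPT-3 pattern for illustration, not Lexion's actual implementation.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

def draft_clause(goal: str) -> str:
    prompt = (
        f"Draft a contract clause that accomplishes the following goal: {goal}\n"
        "Use plain, widely used commercial language. Return only the clause text.\n\n"
        "Clause:"
    )
    resp = openai.Completion.create(
        model="text-davinci-003",  # assumed model
        prompt=prompt,
        max_tokens=250,
        temperature=0.2,
    )
    return resp.choices[0].text.strip()

print(draft_clause("limit the vendor's total liability to fees paid in the prior 12 months"))
```

In a Word plugin like the one described, the generated text would then be inserted at the cursor for the lawyer to review, accept, or edit.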
ChatGPT, Chatbots and Artificial Intelligence in Education — from ditchthattextbook.com by Matt Miller AI just stormed into the classroom with the emergence of ChatGPT. How do we teach now that it exists? How can we use it? Here are some ideas.
Excerpt: Now, we’re wondering …
What is ChatGPT? And, more broadly, what are chatbots and AI?
How is this going to impact education?
How can I teach tomorrow knowing that this exists?
Can I use this as a tool for teaching and learning?
Should we block it through the school internet filter — or try to ban it?
The tech world is abuzz over ChatGPT, a chat bot that is said to be the most advanced ever made.
It can create poems, songs, and even computer code. It convincingly constructed a passage of text on how to remove a peanut butter sandwich from a VCR, in the voice of the King James Bible.
As a PhD microbiologist, I devised a 10-question quiz that would be appropriate as a final exam for college-level microbiology students. ChatGPT blew it away.
On the one hand, yes, ChatGPT is capable of producing prose that looks convincing. But on the other hand, what it means to be convincing depends on context. The kind of prose you might find engaging and even startling in the context of a generative encounter with an AI suddenly seems just terrible in the context of a professional essay published in a magazine such as The Atlantic. And, as Warner’s comments clarify, the writing you might find persuasive as a teacher (or marketing manager or lawyer or journalist or whatever else) might have been so by virtue of position rather than meaning: The essay was extant and competent; the report was in your inbox on time; the newspaper article communicated apparent facts that you were able to accept or reject.
These lines of demarcation—the lines between when a tool can do all of a job, some of it, or none of it—are both constantly moving and critical to watch. Because they define knowledge work and point to the future of work. We need to be teaching people how to do the kinds of knowledge work that computers can’t do well and are not likely to be able to do well in the near future. Much has been written about the economic implications of the AI revolution, some of which are problematic for the employment market. But we can put too much emphasis on that part. Learning about artificial intelligence can be a means for exploring, appreciating, and refining natural intelligence. These tools are fun. I learn from using them. Those two statements are connected.
Google is planning to create a new AI feature for its Search engine, one that would rival the recently released and controversial ChatGPT from OpenAI. The company revealed this after a recent executive meeting, involving CEO Sundar Pichai and AI head Jeff Dean, that discussed the technology the internet company already has and how it could soon be developed into products.
Employees at the Mountain View giant were concerned that the company had fallen behind current AI trends set by the likes of OpenAI, despite already having similar technology lying around.
And more focused on the business/vocational/corporate training worlds:
There are a lot of knowledge management, enterprise learning and enterprise search products on the market today, but what Sana believes it has struck on uniquely is a platform that combines all three to work together: a knowledge management-meets-enterprise-search-meets-e-learning platform.
Three sources briefed on OpenAI’s recent pitch to investors said the organization expects $200 million in revenue next year and $1 billion by 2024.
The forecast, first reported by Reuters, represents how some in Silicon Valley are betting the underlying technology will go far beyond splashy and sometimes flawed public demos.
“We’re going to see advances in 2023 that people two years ago would have expected in 2033. It’s going to be extremely important not just for Microsoft’s future, but for everyone’s future,” he said in an interview this week.
Professors, programmers and journalists could all be out of a job in just a few years, after the latest chatbot from the Elon Musk-founded OpenAI foundation stunned onlookers with its writing ability, proficiency at complex tasks, and ease of use.
The system, called ChatGPT, is the latest evolution of the GPT family of text-generating AIs. Two years ago, the team’s previous AI, GPT-3, was able to generate an opinion piece for the Guardian, and ChatGPT has significant further capabilities.
In the days since it was released, academics have generated responses to exam queries that they say would result in full marks if submitted by an undergraduate, and programmers have used the tool to solve coding challenges in obscure programming languages in a matter of seconds – before writing limericks explaining the functionality.
Is the college essay dead? Are hordes of students going to use artificial intelligence to cheat on their writing assignments? Has machine learning reached the point where auto-generated text looks like what a typical first-year student might produce?
And what does it mean for professors if the answer to those questions is “yes”?
…
Scholars of teaching, writing, and digital literacy say there’s no doubt that tools like ChatGPT will, in some shape or form, become part of everyday writing, the way calculators and computers have become integral to math and science. It is critical, they say, to begin conversations with students and colleagues about how to shape and harness these AI tools as an aide, rather than a substitute, for learning.
“Academia really has to look at itself in the mirror and decide what it’s going to be,” said Josh Eyler, director of the Center for Excellence in Teaching and Learning at the University of Mississippi, who has criticized the “moral panic” he has seen in response to ChatGPT. “Is it going to be more concerned with compliance and policing behaviors and trying to get out in front of cheating, without any evidence to support whether or not that’s actually going to happen? Or does it want to think about trust in students as its first reaction and building that trust into its response and its pedagogy?”
ChatGPT is incredibly limited, but good enough at some things to create a misleading impression of greatness.
it’s a mistake to be relying on it for anything important right now. it’s a preview of progress; we have lots of work to do on robustness and truthfulness.
1/Large language models like Galactica and ChatGPT can spout nonsense in a confident, authoritative tone. This overconfidence – which reflects the data they’re trained on – makes them more likely to mislead.
The thing is, a good toy has a huge advantage: People love to play with it, and the more they do, the quicker its designers can make it into something more. People are documenting their experiences with ChatGPT on Twitter, looking like giddy kids experimenting with something they’re not even sure they should be allowed to have. There’s humor, discovery and a game of figuring out the limitations of the system.
And on the legal side of things:
In the legal education context, I’ve been playing around with generating fact patterns and short documents to use in exercises.
Edging towards the end of the year, it is time for a summary of how digital health progressed in 2022. It is easy to get lost in the noise – I myself shared well over a thousand articles, studies and news items between January and the end of November 2022. Thus, just like in 2021, 2020 (and so on), I picked the 10 topics I believe will have the most significance in the future of healthcare.
9. Smart TVs Becoming A Remote Care Platform
The concept of turning one’s TV into a remote care hub isn’t new. Back in 2012, researchers designed a remote health assistance system for the elderly to use through a TV set. But we are exploring this idea now as a major tech company has recently pushed for telehealth through TVs. In early 2022, electronics giant LG announced that its smart TVs will be equipped with the remote health platform Independa.
And in just a few months (late November) came a follow-up: a product called Carepoint TV Kit 200L, in beta testing now. Powered by Amwell’s Converge platform, the product is aimed at helping clinicians more easily engage with patients amid healthcare’s workforce shortage crisis.
Asynchronous telemedicine is one of those terms we will need to get used to in the coming years. Although it may sound alien, chances are you have been using some form of it for a while.
With the progress of digital health, especially due to the pandemic’s impact, remote care has become a popular approach in the healthcare setting. It can come in two forms: synchronous telemedicine and asynchronous telemedicine.
From the 45+ #books that I’ve read in last 2 years here are my top 10 recommendations for #learningdesigners or anyone in #learninganddevelopment
Speaking of recommended books (but from a more technical perspective this time), also see:
10 must-read tech books for 2023 — from enterprisersproject.com by Katie Sanders (Editorial Team) Get new thinking on the technologies of tomorrow – from AI to cloud and edge – and the related challenges for leaders
NVIDIA Teams With Microsoft to Build Massive Cloud AI Computer — from nvidianews.nvidia.com Tens of Thousands of NVIDIA GPUs, NVIDIA Quantum-2 InfiniBand and Full Stack of NVIDIA AI Software Coming to Azure; NVIDIA, Microsoft and Global Enterprises to Use Platform for Rapid, Cost-Effective AI Development and Deployment
Excerpt:
NVIDIA announced [on 11/16/22] a multi-year collaboration with Microsoft to build one of the most powerful AI supercomputers in the world, powered by Microsoft Azure’s advanced supercomputing infrastructure combined with NVIDIA GPUs, networking and full stack of AI software to help enterprises train, deploy and scale AI, including large, state-of-the-art models.
Best Webcams for Teachers and Students — from techlearning.com by Luke Edwards Get the best webcams for teachers and students to help with hybrid learning and more
Law schools have increasingly sorted along gender lines, and the makeup of faculties has become a reflection of schools’ student population, according to preprint research published on the SSRN, an open access platform for early-stage research.
Technology is changing the legal sector. The UK government has recently announced that it is investing £4m to modernise the UK legal industry through its LawTechUK programme. The initiative is a part of a drive to keep the UK at the global forefront of legal services.
When the first seeds of the legal technologist role were planted in the early 2010s, they took some time to germinate. A decade later, after a seemingly slow start, there has been an explosion of investment, awareness and new job opportunities in legal technology.
But as this new strand of the legal profession sets its roots deeper in the industry, what exactly does it take to be a legal technologist?
In another sign of the changing times we are in, leading New York law firm Shearman & Sterling is formally launching a Legal Operations capability. The move follows fellow elite rival Cleary Gottlieb launching Cleary X, its innovation-focused legal delivery arm.
A decade ago many would not have expected New York’s top firms to be that bothered with anything other than high-end legal advisory and disputes work, but the legal world is evolving.
‘Legal Operations by Shearman’ will offer a range of services including legal tech help, data analytics, and inhouse department design, but may work with ALSPs and other groups when it comes to CLM onboarding, with these other providers handling actual implementation and with Shearman focused on the bigger legal ops picture.
The real strength of weak ties — from news.stanford.edu; with thanks to Roberto Ferraro for this resource A team of Stanford, MIT, and Harvard scientists finds “weaker ties” are more beneficial for job seekers on LinkedIn.
Excerpt:
A team of researchers from Stanford, MIT, Harvard, and LinkedIn recently conducted the largest experimental study to date on the impact of digital job sites on the labor market and found that weaker social connections have a greater beneficial effect on job mobility than stronger ties.
“A practical implication of the research is that it’s helpful to reach out to people beyond your immediate friends and colleagues when looking for a new job,” explained Erik Brynjolfsson, who is the Jerry Yang and Akiko Yamazaki Professor at Stanford University. “People with whom you have weaker ties are more likely to have information or connections that are useful and relevant.”
As part of a survey on hybrid working patterns of more than 20,000 people in 11 countries, Microsoft has called for an end to ‘productivity paranoia’ with 85% of business leaders still saying they find it difficult to have confidence in staff productivity when remote working.
…
“Closing the feedback loop is key to retaining talent. Employees who feel their companies use employee feedback to drive change are more satisfied (90% vs. 69%) and engaged (89% vs. 73%) compared to those who believe their companies don’t drive change. And the employees who don’t think their companies drive change based on feedback? They’re more than twice as likely to consider leaving in the next year (16% vs. 7%) compared to those who do. And it’s not a one-way street. To build trust and participation in feedback systems, leaders should regularly share what they’re hearing, how they’re responding, and why.”
From DSC: It seems to me that trust and motivation are highly involved here. Trust in one’s employees to do their jobs. And employees who aren’t producing and have low motivation levels should consider changing jobs/industries to find something that’s much more intrinsically motivating to them. Find a cause/organization that’s worth working for.
Today, websites have become highly engaging, and the internet is full of exciting experiences. Yet web 3.0 is coming, with noteworthy trends and things to look out for.
Here are the top 5 developments in web 3.0 expected in the coming five years.
Some of Europe’s biggest telecoms operators have joined forces for a pilot project that aims to make holographic calls as simple and straightforward as a phone call.
Deutsche Telekom, Orange, Telefónica and Vodafone are working with holographic presence company Matsuko to develop an easy-to-use platform for immersive 3D experiences that could transform communications and the virtual events market.
Advances in connectivity, thanks to 5G and edge computing technology, allow smooth and natural movement of holograms and make the possibility of easy-to-access holographic calls a reality.
Few things are more important than delivering the right education to individuals around the globe. Whether enlightening a new generation of young students, or empowering professionals in a complex business environment, learning is the key to building a better future.
In recent years, we’ve discovered just how powerful technology can be in delivering information to those who need it most. The cloud has paved the way for a new era of collaborative remote learning, while AI tools and automated systems are assisting educators in their tasks. XR has the potential to be one of the most disruptive new technologies in the educational space.
With Extended Reality technology, training professionals can deliver incredible experiences to students all over the globe, without the risks or resource requirements of traditional education. Today, we’re looking at just some of the major vendors leading the way to a future of immersive learning.