Not sure if you're behind or ahead in AI adoption? I created this guide to help you benchmark.
* Many who think they’re behind are actually on track, and some who think they’re ahead are not.
** These insights are my own opinion, based on years of work with hundreds…
Then came ChatGPT, OpenAI’s widely used artificial intelligence bot. For Fender, it was a no-brainer to query it for help developing deep opening questions.
The chatbot and other AI tools like it have found an eager audience among homeschoolers and microschoolers, with parents and teachers readily embracing them as brainstorming and management tools, even as public schools take a more cautious approach, often banning such tools outright.
A few observers say AI may even make homeschooling more practical, opening it up to busy parents who might have balked previously.
“Not everyone is using it, but some are very excited about it,” said Amir Nathoo, co-founder of Outschool, an online education platform.
Frontier intelligence
Claude 3.5 Sonnet sets new industry benchmarks for graduate-level reasoning (GPQA), undergraduate-level knowledge (MMLU), and coding proficiency (HumanEval). It shows marked improvement in grasping nuance, humor, and complex instructions and is exceptional at writing high-quality content with a natural, relatable tone.
2x speed
State-of-the-art vision
Introducing Artifacts—a new way to use Claude
We’re also introducing Artifacts on claude.ai, a new feature that expands how you can interact with Claude. When you ask Claude to generate content like code snippets, text documents, or website designs, these Artifacts appear in a dedicated window alongside your conversation. This creates a dynamic workspace where you can see, edit, and build upon Claude’s creations in real time, seamlessly integrating AI-generated content into your projects and workflows.
If you teach computer science, user interface design, or anything involving web development, you can have students prompt Claude to produce a web page’s source code, see that code appear on the right side, preview the rendered page, and iterate through code+preview combinations.
If you teach economics, financial analysis, or accounting, you can have students prompt Claude to create analyses of markets or businesses, including interactive infographics, charts, or reports built with React (a minimal sketch of such a component follows this list). Since Claude shows its work with Artifacts, your students can see how different prompts result in different statistical analyses, different representations of the same information, and more.
If you teach subjects that produce purely textual outputs without a code intermediary, like philosophy, creative writing, or journalism, your students can compare prompting techniques, easily review their work, note common issues, and iterate drafts by comparing versions.
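To make the economics example concrete, here is a minimal sketch of the kind of React component a student prompt like “chart these quarterly revenue figures” might produce as an Artifact. The component name, the figures, and the use of the recharts library are my own illustrative assumptions, not actual Claude output:

```tsx
// Hypothetical Artifact sketch: a simple revenue bar chart.
// Assumes a React environment with the recharts library available
// (npm install recharts); the figures below are invented for illustration.
import React from "react";
import {
  ResponsiveContainer, BarChart, Bar,
  XAxis, YAxis, Tooltip, CartesianGrid,
} from "recharts";

// Made-up quarterly revenue data (in millions of dollars).
const data = [
  { quarter: "Q1", revenue: 120 },
  { quarter: "Q2", revenue: 135 },
  { quarter: "Q3", revenue: 128 },
  { quarter: "Q4", revenue: 151 },
];

export default function RevenueChart() {
  return (
    <ResponsiveContainer width="100%" height={300}>
      <BarChart data={data}>
        <CartesianGrid strokeDasharray="3 3" />
        <XAxis dataKey="quarter" />
        <YAxis />
        <Tooltip />
        <Bar dataKey="revenue" fill="#4f46e5" />
      </BarChart>
    </ResponsiveContainer>
  );
}
```

Because the Artifact pane renders a component like this live, students can revise the prompt (“make it a line chart,” “add a second data series”), watch the code and the chart change together, and compare how different prompts shape the analysis.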
I see this as the first serious step towards improving the otherwise terrible user interfaces of LLMs for broad use. It may turn out to be a small change in the grand scheme of things, but it sure feels like a big improvement — especially in the pedagogical context.
And speaking of training students on AI, also see:
Schools could enhance their curricula by incorporating debate, Model UN and mock government programs, business plan competitions, internships and apprenticeships, interdisciplinary and project-based learning initiatives, makerspaces and innovation labs, community service-learning projects, student-run businesses or non-profits, interdisciplinary problem-solving challenges, public speaking and presentation skills courses, and design thinking workshops.
These programs foster essential skills such as recognizing and addressing complex challenges, collaboration, sound judgment, and decision-making. They also enhance students’ ability to communicate with clarity and precision, while nurturing creativity and critical thinking. By providing hands-on, real-world experiences, these initiatives bridge the gap between theoretical knowledge and practical application, preparing students more effectively for the multifaceted challenges they will face in their future academic and professional lives.
Our vision for Claude has always been to create AI systems that work alongside people and meaningfully enhance their workflows. As a step in this direction, Claude.ai Pro and Team users can now organize their chats into Projects, bringing together curated sets of knowledge and chat activity in one place—with the ability to make their best chats with Claude viewable by teammates. With this new functionality, Claude can enable idea generation, more strategic decision-making, and exceptional results.
Projects are available on Claude.ai for all Pro and Team customers, and can be powered by Claude 3.5 Sonnet, our latest release which outperforms its peers on a wide variety of benchmarks. Each project includes a 200K context window, the equivalent of a 500-page book, so users can add all of the relevant documents, code, and insights to enhance Claude’s effectiveness.
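(A rough sanity check on that comparison: 200,000 tokens is on the order of 150,000 words at roughly 0.75 words per token, and at a typical ~300 words per page that works out to about 500 pages.)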
And to understand the value of AI, they need to do R&D. Since AI doesn’t work like traditional software, but more like a person (even though it isn’t one), there is no reason to suspect that the IT department has the best AI prompters, nor that it has any particular insight into the best uses of AI inside an organization. IT certainly plays a role, but the actual use cases will come from workers and managers who find opportunities to use AI to help them with their job. In fact, for large companies, the source of any real advantage in AI will come from the expertise of their employees, which is needed to unlock the expertise latent in AI.
Ilya Sutskever, OpenAI’s co-founder and former chief scientist, is starting a new AI company focused on safety. In a post on Wednesday, Sutskever revealed Safe Superintelligence Inc. (SSI), a startup with “one goal and one product”: creating a safe and powerful AI system.
Ilya Sutskever Has a New Plan for Safe Superintelligence — from bloomberg.com by Ashlee Vance (behind a paywall)
OpenAI’s co-founder discloses his plans to continue his work at a new research lab focused on artificial general intelligence.
Ilya Sutskever is kind of a big deal in AI, to put it lightly.
Part of OpenAI’s founding team, Ilya was Chief Scientist (read: genius) before being part of the coup that fired Sam Altman.
… Yesterday, Ilya announced that he’s forming a new initiative called Safe Superintelligence.
If AGI = AI that can perform a wide range of tasks at our level, then Superintelligence = an even more advanced AI that surpasses human capabilities in all areas.
As the tech giants compete in a global AI arms race, a frenzy of data center construction is sweeping the country. Some computing campuses require as much energy as a modest-sized city, turning tech firms that promised to lead the way into a clean energy future into some of the world’s most insatiable guzzlers of power. Their projected energy needs are so huge, some worry whether there will be enough electricity to meet them from any source.
Federal officials, AI model operators and cybersecurity companies ran the first joint simulation of a cyberattack involving a critical AI system last week.
Why it matters: Responding to a cyberattack on an AI-enabled system will require a different playbook than the typical hack, participants told Axios.
The big picture: Both Washington and Silicon Valley are attempting to get ahead of the unique cyber threats facing AI companies before they become more prominent.
Immediately after we saw Sora-like videos from KLING, Luma AI’s Dream Machine video results overshadowed them.
…
Dream Machine is a next-generation AI video model that creates high-quality, realistic shots from text instructions and images.
Introducing Gen-3 Alpha — from runwayml.com by Anastasis Germanidis
A new frontier for high-fidelity, controllable video generation.
AI-Generated Movies Are Around the Corner — from news.theaiexchange.com by The AI Exchange
The future of AI in filmmaking; participate in our AI for Agencies survey
AI-Generated Feature Films Are Around the Corner.
We predict feature-film length AI-generated films are coming by the end of 2025, if not sooner.
It further seems to me that there is increasingly a divide in the use of generative AI between larger firms and smaller firms. Some will jump on me for saying that, because there are clearly smaller firms that are leading the pack in their development and use of generative AI. (I’m looking at you, Siskind Susser.) By the same token, there are larger firms that have locked their doors to generative AI.
But the firms most openly incorporating generative AI into their workflows seem mostly to be larger firms. There is good reason for this. Larger firms have innovation officers, KM professionals, and others on staff who are leading the charge on generative AI. Thanks to them, those firms are better equipped to survey the AI landscape and test products under controlled and secure conditions.
Eighteen months ago, the first-of-its-kind Judicial Innovation Fellowship launched with the mission of embedding experienced technologists and designers within state, local, and tribal courts to develop technology-based solutions to improve the public’s access to justice. Housed within the Institute for Technology Law & Policy at Georgetown University Law Center, the program was designed to be a catalyst for innovation to enable courts to better serve the legal needs of the public.
The advances with generative AI tools open new career opportunities for lawyers, from legal tech consultants to junior lawyers supervising AI systems.
…
Institutions like Harvard Law School and Yale Law School are introducing courses that focus on AI’s implications in the legal field, and such career opportunities continue to arise.
…
Pursuing a career in Legal AI requires a unique blend of legal knowledge and technical skills. Various educational pathways can equip aspiring professionals with these competencies.
We have to provide instructors the support they need to leverage educational technologies like generative AI effectively in the service of learning. Given the amount of benefit that could accrue to students if powerful tools like generative AI were used effectively by instructors, it seems unethical not to provide instructors with professional development that helps them better understand how learning occurs and what effective teaching looks like. Without more training and support for instructors, the amount of student learning higher education will collectively “leave on the table” will only increase as generative AI gets more and more capable. And that’s a problem.
From DSC: As is often the case, David put together a solid posting here. A few comments/reflections on it:
I agree that more training/professional development is needed, especially regarding generative AI. This would help achieve a far greater ROI and impact.
The pace of change makes it difficult to see where the sand is settling… and thus what to focus on
The Teaching & Learning Groups out there are also trying to learn and grow in their knowledge (so that they can train others)
The administrators out there are also trying to figure out what all of this generative AI stuff is all about; and so are the faculty members. It takes time for educational technologies’ impact to roll out and be integrated into how people teach.
As we’re talking about multiple disciplines here, I think we need more team-based content creation and delivery.
There needs to be more research on how best to use AI — again, it would be helpful if the sand settled a bit first, so as not to waste time and $$. But then that research needs to be piped into the classrooms far better.
Introducing Gen-3 Alpha: Runway’s new base model for video generation.
Gen-3 Alpha can create highly detailed videos with complex scene changes, a wide range of cinematic choices, and detailed art directions. https://t.co/YQNE3eqoWf
Introducing GEN-3 Alpha – The first of a series of new models built by creatives for creatives. Video generated with @runwayml‘s new Text-2-Video model.
Learning personalisation. LinkedIn continues to be bullish on its video-based learning platform, and it appears to have found a strong current among users who need to skill up in AI. Cohen said that traffic for AI-related courses — which include modules on technical skills as well as non-technical ones such as basic introductions to generative AI — has increased by 160% over last year.
You can be sure that LinkedIn is pushing its search algorithms to tap into the interest, but it’s also boosting its content with AI in another way.
For Premium subscribers, it is piloting what it describes as “expert advice, powered by AI.” Tapping into expertise from well-known instructors such as Alicia Reece, Anil Gupta, Dr. Gemma Leigh Roberts and Lisa Gates, LinkedIn says its AI-powered coaches will deliver responses personalized to users, as a “starting point.”
These will, in turn, also appear as personalized coaches that a user can tap while watching a LinkedIn Learning course.
Personalized learning for everyone: Whether you’re looking to change jobs or not, the skills required in the workplace are expected to change by 68% by 2030.
Expert advice, powered by AI: We’re beginning to pilot the ability to get personalized practical advice instantly from industry leading business leaders and coaches on LinkedIn Learning, all powered by AI. The responses you’ll receive are trained by experts and represent a blend of insights that are personalized to each learner’s unique needs. While human professional coaches remain invaluable, these tools provide a great starting point.
Personalized coaching, powered by AI, when watching a LinkedIn course: As learners —including all Premium subscribers — watch our new courses, they can now simply ask for summaries of content, clarify certain topics, or get examples and other real-time insights, e.g. “Can you simplify this concept?” or “How does this apply to me?”
How Learning Designers Are Using AI for Analysis — from drphilippahardman.substack.com by Dr. Philippa Hardman
A practical guide on how to 10X your analysis process using free AI tools, based on real use cases
There are three key areas where AI tools make a significant impact on how we tackle the analysis part of the learning design process:
Understanding the why: what is the problem this learning experience solves? What’s the change we want to see as a result?
Defining the who: who do we need to target in order to solve the problem and achieve the intended goal?
Clarifying the what: given who our learners are and the goal we want to achieve, what concepts and skills do we need to teach?
Two new surveys, both released this month, show how high school and college-age students are embracing artificial intelligence. There are some inconsistencies and many unanswered questions, but what stands out is how much teens are turning to AI for information and to ask questions, not just to do their homework for them. And they’re using it for personal reasons as well as for school. Another big takeaway is that there are different patterns by race and ethnicity with Black, Hispanic and Asian American students often adopting AI faster than white students.
We’ve ceded so much trust to digital systems already that most simply assume a tool is safe to use with students because a company published it. We don’t check to see if it is compliant with any existing regulations. We don’t ask what powers it. We do not question what happens to our data or our students’ data once we upload it. We likewise don’t know where its information came from or how it came to generate human-like responses. The trust we put into these systems is entirely unearned and uncritical.
…
The allure of these AI tools for teachers is understandable—who doesn’t want to save time on the laborious process of designing lesson plans and materials? But we have to ask ourselves what is lost when we cede the instructional design process to an automated system without critical scrutiny.
From DSC: I post this to be a balanced publisher of information. I don’t agree with everything Marc says here, but he brings up several solid points.
As news about generative artificial intelligence (GenAI) continually splashes across social media feeds, including how ChatGPT 4o can help you play Rock, Paper, Scissors with a friend, breathtaking pronouncements about GenAI’s “disruptive” impact aren’t hard to find.
It turns out that it doesn’t make much sense to talk about GenAI as being “disruptive” in and of itself.
Can it be part of a disruptive innovation? You bet.
But much more important than just the AI technology in determining whether something is disruptive is the business model in which the AI is used—and its competitive impact on existing products and services in different markets.
On a somewhat related note, also see:
National summit explores how digital education can promote deeper learning — from digitaleducation.stanford.edu by Jenny Robinson; via Eric Kunnen on Linkedin.com
The conference, held at Stanford, was organized to help universities imagine how digital innovation can expand their reach, improve learning, and better serve the public good.
The summit was organized around several key questions: “What might learning design, learning technologies, and educational media look like in three, five, or ten years at our institutions? How will blended and digital education be poised to advance equitable, just, and accessible education systems and contribute to the public good? What structures will we need in place for our teams and offices?”
Dream Machine is an AI model that makes high quality, realistic videos fast from text and images.
It is a highly scalable and efficient transformer model trained directly on videos, making it capable of generating physically accurate, consistent, and eventful shots. Dream Machine is our first step towards building a universal imagination engine, and it is available to everyone now!
Luma AI just dropped a Sora-like AI video generator called Dream Machine.
But unlike Sora or KLING, it’s completely open access to the public.
From DSC: Last Thursday, I presented at the Educational Technology Organization of Michigan’s Spring 2024 Retreat. I wanted to pass along my slides to you all, in case they are helpful to you.
AI Policy 101: a Beginners’ Framework — from drphilippahardman.substack.com by Dr. Philippa Hardman
How to make a case for AI experimentation & testing in learning & development
The role of learning & development
Given these risks, what can L&D professionals do to ensure generative AI contributes to effective learning? The solution lies in embracing the role of trusted learning advisors, guiding the use of AI tools in a way that prioritizes learning outcomes over speed alone. Here are three key steps to achieve this:
1. Playtest and Learn About AI…
2. Set the Direction for AI to Be Learner-Centered…
3. Become Trusted Learning Advisors…
In the last year, AI has become even more intertwined with our education system. More teachers, parents, and students are aware of it and have used it themselves on a regular basis. It is all over our education system today.
While negative views of AI have crept up over the last year, students, teachers, and parents feel very positive about it in general. On balance they see positive uses for the technology in school, especially if they have used it themselves.
Most K-12 teachers, parents, and students don’t think their school is doing much about AI, despite its widespread use. Most say their school has no policy on it, is doing nothing to offer desired teacher training, and isn’t meeting the demand of students who’d like a career in a job that will need AI.
The AI vacuum in school policy means it is currently used “unauthorized,” while instead people want policies that encourage AI. Kids, parents, and teachers are figuring it out on their own/without express permission, whereas all stakeholders would rather have a policy that explicitly encourages AI from a thoughtful foundation.
There is much discourse about the rise and prevalence of AI in education and beyond. These debates often lack the perspectives of key stakeholders – parents, students and teachers.
In 2023, the Walton Family Foundation commissioned the first national survey of teacher and student attitudes toward ChatGPT. The findings showed that educators and students embrace innovation and are optimistic that AI can meaningfully support traditional instruction.
A new survey conducted May 7-15, 2024, showed that knowledge of and support for AI in education is growing among parents, students and teachers. More than 80% of each group says it has had a positive impact on education.
Apple announced “Apple Intelligence” at WWDC 2024, its name for a new suite of AI features for the iPhone, Mac, and more. Starting later this year, Apple is rolling out what it says is a more conversational Siri, custom, AI-generated “Genmoji,” and GPT-4o access that lets Siri turn to OpenAI’s chatbot when it can’t handle what you ask it for.
SAN FRANCISCO — Apple officially launched itself into the artificial intelligence arms race, announcing a deal with ChatGPT maker OpenAI to use the company’s technology in its products and showing off a slew of its own new AI features.
The announcements, made at the tech giant’s annual Worldwide Developers Conference on Monday in Cupertino, Calif., are aimed at helping the tech giant keep up with competitors such as Google and Microsoft, which have boasted in recent months that AI makes their phones, laptops and software better than Apple’s. In addition to Apple’s own homegrown AI tech, the company’s phones, computers and iPads will also have ChatGPT built in “later this year,” a huge validation of the importance of the highflying start-up’s tech.
The highly anticipated AI partnership is the first of its kind for Apple, which has been regarded by analysts as slower to adopt artificial intelligence than other technology companies such as Microsoft and Google.
The deal allows Apple’s millions of users to access technology from OpenAI, one of the highest-profile artificial intelligence companies of recent years. OpenAI has already established partnerships with a variety of technology and publishing companies, including a multibillion-dollar deal with Microsoft.
The real deal here is that Apple is literally putting AI into the hands of >1B people, most of whom will probably be using AI for the 1st time. And it’s delivering AI that’s actually useful (forget those Genmojis, we’re talking about implanting ChatGPT-4o’s brain into Apple devices).
It’s WWDC 2024 keynote time! Each year Apple kicks off its Worldwide Developers Conference with a few hours of just straight announcements, like the long-awaited Apple Intelligence and a makeover for its smart AI assistant, Siri. We expected many of them to revolve around the company’s artificial intelligence ambitions (and here), and Apple didn’t disappoint. We also bring you news about Vision Pro and lots of feature refreshes.
Why Gamma is great for presentations — from Jeremy Caplan
Gamma has become one of my favorite new creativity tools. You can use it like PowerPoint or Google Slides, adding text and images to make impactful presentations. It lets you create vertical, square, or horizontal slides. You can embed online content to make your deck stand out with videos, data, or graphics. You can even use it to make quick websites.
Its best feature, though, is an easy-to-use application of AI. The AI will learn from any document you import, or you can use a text prompt to create a strong deck or site instantly.
ChatGPT has 180.5 million users, of whom 100 million are active weekly.
In January 2024, ChatGPT received 2.3 billion website visits, and 2 million developers are using its API.
The highest percentage of ChatGPT users are in the USA (46.75%), followed by India (5.47%). ChatGPT is banned in 7 countries, including Russia and China.
OpenAI’s projected revenue from ChatGPT is $2 billion in 2024.
Running ChatGPT costs OpenAI around $700,000 daily.
Sam Altman is seeking $7 trillion for a global AI chip project, while OpenAI is also listed as a major shareholder in Reddit.
ChatGPT offers a free version with GPT-3.5 and a Plus version with GPT-4 ($20 per month), which is reported to be 40% more accurate and 82% safer.
ChatGPT is being used for automation, education, coding, data analysis, writing, etc.
43% of college students and 80% of Fortune 500 companies are using ChatGPT.
A 2023 study found 25% of US companies surveyed saved $50K-$70K using ChatGPT, while 11% saved over $100K.
In an ever-evolving digital landscape, the fusion of technology and legal services has ushered in a new era of efficiency, accessibility, and innovation. The traditional image of legal professionals buried in stacks of paperwork and endless research has been transformed by cutting-edge technologies that promise to revolutionize how legal services are delivered, accessed, and executed. From artificial intelligence to blockchain, cloud computing to automation, the impact of technology on modern legal services is palpable and profound.
Technology Trends Shaping Legal Services
The legal industry is experiencing a seismic shift driven by technology, with key trends reshaping legal services. Artificial Intelligence (AI) is revolutionizing legal research, document analysis, and predictive analytics, enabling legal professionals to streamline their workflow and deliver more accurate and timely insights to clients. Blockchain technology improves the safety and transparency of legal transactions, while cloud computing optimizes data storage and accessibility in the legal sector.
A new survey finds that clients care deeply about their attorney’s tech tools and tech skills. The numbers don’t lie: Legal tech matters. An efficient, integrated system is no longer “nice to have.” It’s table stakes, from case management to client communications to online filing and billing.
Every six months or so, I write a guide to doing stuff with AI. A lot has changed since the last guide, while a few important things have stayed the same. It is time for an update.
…
To learn to do serious stuff with AI, choose a Large Language Model and just use it to do serious stuff – get advice, summarize meetings, generate ideas, write, produce reports, fill out forms, discuss strategy – whatever you do at work, ask the AI to help. A lot of people I talk to seem to get the most benefit from engaging the AI in conversation, often because it gives good advice, but also because just talking through an issue yourself can be very helpful. I know this may not seem particularly profound, but “always invite AI to the table” is the principle in my book that people tell me had the biggest impact on them. You won’t know what AI can (and can’t) do for you until you try to use it for everything you do. And don’t sweat prompting too much (though here are some useful tips); just start a conversation with AI and see where it goes.
You do need to use one of the most advanced frontier models, however.