The artificial intelligence sector has never been more competitive. Forbes received some 1,900 submissions this year, more than double last year’s count. Applicants do not pay a fee to be considered and are judged on their business promise and technical use of AI through a quantitative algorithm and qualitative judging panels. Companies are encouraged to share data on diversity, and our list aims to promote a more equitable startup ecosystem. But disparities remain sharp in the industry. Only 12 companies have women cofounders, five of whom serve as CEO, the same count as last year. For more, see our full package of coverage, including a detailed explanation of the list methodology, videos and analyses on trends in AI.
New Generative AI video tools coming to Premiere Pro this year will streamline workflows and unlock new creative possibilities, from extending a shot to adding or removing objects in a scene.
Adobe is developing a video model for Firefly, which will power video and audio editing workflows in Premiere Pro and enable anyone to create and ideate.
Adobe previews early explorations of bringing third-party generative AI models from OpenAI, Pika Labs and Runway directly into Premiere Pro, making it easy for customers to draw on the strengths of different models within the powerful workflows they use every day.
AI-powered audio workflows in Premiere Pro are now generally available, making audio editing faster, easier and more intuitive.
AI Resources and Teaching | Kent State University offers valuable resources for educators interested in incorporating artificial intelligence (AI) into their teaching practices. The university recognizes that the rapid emergence of AI tools presents both challenges and opportunities in higher education.
The AI Resources and Teaching page provides educators with information and guidance on various AI tools and their responsible use within and beyond the classroom. The page covers different areas of AI application, including language generation, visuals, videos, music, information extraction, quantitative analysis, and AI syllabus language examples.
For all its jaw-dropping power, Watson the computer overlord was a weak teacher. It couldn’t engage or motivate kids, inspire them to reach new heights or even keep them focused on the material — all qualities of the best mentors.
It’s a finding with some resonance to our current moment of AI-inspired doomscrolling about the future of humanity in a world of ascendant machines. “There are some things AI is actually very good for,” Nitta said, “but it’s not great as a replacement for humans.”
His five-year journey to essentially a dead-end could also prove instructive as ChatGPT and other programs like it fuel a renewed, multimillion-dollar experiment to, in essence, prove him wrong.
…
To be sure, AI can do sophisticated things such as generating quizzes from a class reading and editing student writing. But the idea that a machine or a chatbot can actually teach as a human can, he said, represents “a profound misunderstanding of what AI is actually capable of.”
Nitta, who still holds deep respect for the Watson lab, admits, “We missed something important. At the heart of education, at the heart of any learning, is engagement. And that’s kind of the Holy Grail.”
From DSC: This is why the vision that I’ve been tracking and working on has always said that HUMAN BEINGS will be necessary — they are key to realizing this vision. Along these lines, here’s a relevant quote:
Another crucial component of a new learning theory for the age of AI would be the cultivation of “blended intelligence.” This concept recognizes that the future of learning and work will involve the seamless integration of human and machine capabilities, and that learners must develop the skills and strategies needed to effectively collaborate with AI systems. Rather than viewing AI as a threat to human intelligence, a blended intelligence approach seeks to harness the complementary strengths of humans and machines, creating a symbiotic relationship that enhances the potential of both.
Per Alexander “Sasha” Sidorkin, Head of the National Institute on AI in Society at California State University Sacramento.
Assessment of Student Learning Is Broken — from insidehighered.com by Zach Justus and Nik Janos. And generative AI is the thing that broke it, the authors write.
Generative artificial intelligence (AI) has broken higher education assessment. This has implications from the classroom to institutional accreditation. We are advocating for a one-year pause on assessment requirements from institutions and accreditation bodies.
… Implications and Options
The data we are collecting right now are literally worthless. These same trends implicate all data gathered from December 2022 through the present. So, for instance, if you are conducting a five-year program review for institutional accreditation you should separate the data from before the fall 2022 term and evaluate it independently. Whether you are evaluating writing, STEM outputs, coding, or anything else, you are now looking at some combination of student/AI work. This will get even more confounding as AI tools become more powerful and are integrated into our existing production platforms like Microsoft Office and Google Workspace.
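The split the authors recommend can be done mechanically. Here is a minimal sketch in Python, assuming assessment records carry a date field (the record structure and field names here are hypothetical, and the public release of ChatGPT is used as the cutoff):

```python
from datetime import date

# Public release of ChatGPT (November 30, 2022). Records on or after this
# date may reflect some blend of student and AI work, per the authors.
AI_CUTOFF = date(2022, 11, 30)

def split_assessment_data(records):
    """Partition assessment records into pre- and post-generative-AI cohorts.

    Each record is a dict with a 'date' key (datetime.date). The two
    cohorts should then be evaluated independently, as suggested above.
    """
    pre_ai = [r for r in records if r["date"] < AI_CUTOFF]
    post_ai = [r for r in records if r["date"] >= AI_CUTOFF]
    return pre_ai, post_ai
```

A program reviewer could run every writing sample, STEM output, or code submission through a split like this before computing any five-year trend lines.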
The burden of adapting to artificial intelligence has fallen to faculty, but we are not positioned or equipped to lead these conversations across stakeholder groups.
We keep breaking new ground in AI capabilities, and there seems to be little interest in asking whether we should build the next model to be more life-like. You can now go to Hume.AI and have a conversation with an Empathetic Voice Interface. EVI is groundbreaking and extremely unnerving, but it is no more capable of genuine empathy than your toaster oven.
…
You can have the eLLM mimic a political campaign and call potential voters to sway their vote. You can do this ethically or program it to prey upon people with misinformation.
An eLLM can be used to socially engineer the public based on the values someone programs into it. Whose values, though?
Any company with a digital presence can use an eLLM like EVI to influence their customers. Imagine Alexa suddenly being able to empathize with you as a means to help persuade you to order more products.
An always-on, empathetic system can help a student stay on track to graduate or manipulate them into behaviors that erode their autonomy and free will.
Any foreign government could deploy such a system against a neighboring population and use empathy as a weapon to sow discontent within the opposing population.
From DSC: Marc offers some solid thoughts that should make us all pause and reflect on what he’s saying.
We can endlessly rationalize away the reasons why machines possessing such traits can be helpful, but where is the line that developers and users of such systems refuse to cross in this race to make machines more like us?
Marc Watkins
Along these lines, also see:
Student Chatbot Use ‘Could Be Increasing Loneliness’ — from insidehighered.com by Tom Williams. Study finds students who rely on ChatGPT for academic tasks feel socially supported by artificial intelligence at the expense of their real-life relationships.
…
They found “evidence that while AI chatbots designed for information provision may be associated with student performance, when social support, psychological well-being, loneliness and sense of belonging are considered it has a net negative effect on achievement,” according to the paper published in Studies in Higher Education.
What if, for example, the corporate learning system knew who you were and you could simply ask it a question and it would generate an answer, a series of resources, and a dynamic set of learning objects for you to consume? In some cases you’ll take the answer and run. In other cases you’ll pore over the content. And in other cases you’ll browse through the course and take the time to learn what you need.
And suppose all this happened in a totally personalized way. So you didn’t see a “standard course” but a special course based on your level of existing knowledge?
This is what AI is going to bring us. And yes, it’s already happening today.
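As a thought experiment, the personalization described above can be sketched as a filter over tagged learning objects, so the learner never sees a “standard course,” only material above what they already know. Everything below (the object schema, the level tags, the function name) is an illustrative assumption, not any vendor’s actual API:

```python
def recommend(query, learner_level, catalog):
    """Return learning objects matching a query, skipping material at
    levels the learner has already mastered (a toy stand-in for the
    'no standard course' idea described above)."""
    LEVELS = ["beginner", "intermediate", "advanced"]
    floor = LEVELS.index(learner_level)
    return [
        obj for obj in catalog
        if query.lower() in obj["topic"].lower()
        and LEVELS.index(obj["level"]) >= floor
    ]
```

In a real system the catalog lookup and level inference would of course be driven by an LLM and learner data rather than hand-tagged fields; the sketch only shows the shape of the personalization step.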
For over a year, GPT-4 was the dominant AI model, clearly much smarter than any of the other LLM systems available. That situation has changed in the last month; there are now three GPT-4 class models, each powering its own chatbot: GPT-4 (accessible through ChatGPT Plus or Microsoft’s Copilot), Anthropic’s Claude 3 Opus, and Google’s Gemini Advanced.
… Where we stand
We are in a brief period in the AI era where there are now multiple leading models, but none has yet definitively beaten the GPT-4 benchmark set over a year ago. While this may represent a plateau in AI abilities, I believe this is likely to change in the coming months as, at some point, models like GPT-5 and Gemini 2.0 will be released. In the meantime, you should be using a GPT-4 class model and using it often enough to learn what it does well. You can’t go wrong with any of them; pick a favorite and use it…
From DSC: Here’s a powerful quote from Ethan:
In fact, in my new book I postulate that you haven’t really experienced AI until you have had three sleepless nights of existential anxiety, after which you can start to be productive again.
For us, I think the biggest promise of AI tools like Sora — that can create video with ease — is that they lower the cost of immersive educational experiences. This increases the availability of these experiences, expanding their reach to student populations who wouldn’t otherwise have them, whether due to time, distance, or expense.
Consider the profound impact on a history class, where students are transported to California during the gold rush through hyperrealistic video sequences. This vivifies the historical content and cultivates a deeper connection with the material.
In fact, OpenAI has already demonstrated the promise of this sort of use case, with a very simple prompt producing impressive results…
Take this scenario. A student misses a class and, within twenty minutes, receives a series of texts and even a voicemail from a very concerned and empathic-sounding voice wanting to know what’s going on. Of course, the text is entirely generated, and the voice is synthetic as well, but the student likely doesn’t know this. To them, the communication isn’t as easy to miss or brush off as an email. It sounds like someone who cares is talking to them.
But let’s say that isn’t enough. By that evening, the student still hasn’t logged into their email or checked the LMS. The AI’s strategic-reasoning layer communicates with the predictive AI, analyzing the student’s pattern of behavior against that of students who succeed or fail versus students who are ill. The AI tracks the student’s movements on campus, monitors their social media usage, and deduces the student isn’t ill and is blowing off class.
The AI agent resumes communication with the student. But this time, the strategic AI adopts a different persona, not the kind and empathetic persona used for the initial contact, but a stern, matter-of-fact one. The student’s phone buzzes with alerts that talk about scholarships being lost, teachers being notified, etc. The AI anticipates the excuses the student will use and presents evidence tracking the student’s behavior to show they are not sick.
Not so much focused on learning ecosystems, but still worth mentioning:
NVIDIA Digital Human Technologies Bring AI Characters to Life
Leading AI Developers Use Suite of NVIDIA Technologies to Create Lifelike Avatars and Dynamic Characters for Everything From Games to Healthcare, Financial Services and Retail Applications
Today is the beginning of our moonshot to solve embodied AGI in the physical world. I’m so excited to announce Project GR00T, our new initiative to create a general-purpose foundation model for humanoid robot learning.
These librarians, entrepreneurs, lawyers and technologists built the world where artificial intelligence threatens to upend life and law as we know it – and are now at the forefront of the battles raging within.
… To create this first-of-its-kind guide, we cast a wide net with dozens of leaders in this area, took submissions, and consulted with some of the most esteemed gurus in legal tech. We also researched the cases most likely to have the biggest impact on AI, unearthing the dozen or so top trial lawyers tapped to lead the battles. Many of them bring copyright or IP backgrounds, and more than a few are Bay Area based. Those denoted with an asterisk are members of our Hall of Fame.
descrybe.ai, a year-old legal research startup focused on using artificial intelligence to provide free and easy access to court opinions, has completed its goal of creating AI-generated summaries of all available state supreme and appellate court opinions from throughout the United States.
descrybe.ai describes its mission as democratizing access to legal information and leveling the playing field in legal research, particularly for smaller-firm lawyers, journalists, and members of the public.
As the FlexOS research study “Generative AI at Work” concluded based on a survey among knowledge workers, ChatGPT reigns supreme.
…
2. AI Tool Usage is Way Higher Than People Expect – Beating Netflix, Pinterest, Twitch. As measured by data analysis platform Similarweb based on global web traffic tracking, the AI tools in this list generate over 3 billion monthly visits.
With 1.67 billion visits, ChatGPT represents over half of this traffic and is already bigger than Netflix, Microsoft, Pinterest, Twitch, and The New York Times.
Something unusual is happening in America. Demand for electricity, which has stayed largely flat for two decades, has begun to surge.
Over the past year, electric utilities have nearly doubled their forecasts of how much additional power they’ll need by 2028 as they confront an unexpected explosion in the number of data centers, an abrupt resurgence in manufacturing driven by new federal laws, and millions of electric vehicles being plugged in.
The tumult could seem like a distraction from the startup’s seemingly unending march toward AI advancement. But the tension, and the latest debate with Musk, illuminates a central question for OpenAI, along with the tech world at large as it’s increasingly consumed by artificial intelligence: Just how open should an AI company be?
…
The meaning of the word “open” in “OpenAI” seems to be a particular sticking point for both sides — something that you might think sounds, on the surface, pretty clear. But actual definitions are both complex and controversial.
In partnership with the National Cancer Institute, or NCI, researchers from the Department of Energy’s Oak Ridge National Laboratory and Louisiana State University developed a long-sequence AI transformer capable of processing millions of pathology reports to provide experts researching cancer diagnoses and management with exponentially more accurate information on cancer reporting.
At this moment, as a college student trying to navigate the messy, fast-developing, and varied world of generative AI, I feel more confused than ever. I think most of us can share that feeling. There’s no roadmap on how to use AI in education, and there aren’t the typical years of proof to show something works. However, this promising new tool is sitting in front of us, and we would be foolish to not use it or talk about it.
…
I’ve used it to help me understand sample code I was viewing, rather than mindlessly trying to copy what I was trying to learn from. I’ve also used it to help prepare for a debate, practicing making counterarguments to the points it came up with.
AI alone cannot teach something; there needs to be critical interaction with the responses we are given. However, this is something that is true of any form of education. I could sit in a lecture for hours a week, but if I don’t do the homework or critically engage with the material, I don’t expect to learn anything.
Survey: K-12 Students Want More Guidance on Using AI — from govtech.com by Lauraine Langreo. Research from the nonprofit National 4-H Council found that most 9- to 17-year-olds have an idea of what AI is and what it can do, but most would like help from adults in learning how to use different AI tools.
“Preparing young people for the workforce of the future means ensuring that they have a solid understanding of these new technologies that are reshaping our world,” Jill Bramble, the president and CEO of the National 4-H Council, said in a press release.
Text to video via OpenAI’s Sora. (I had taken this screenshot on the 15th, but am posting it now.)
We’re teaching AI to understand and simulate the physical world in motion, with the goal of training models that help people solve problems that require real-world interaction.
Introducing Sora, our text-to-video model. Sora can generate videos up to a minute long while maintaining visual quality and adherence to the user’s prompt.
At the University of Pennsylvania, undergraduate students in its school of engineering will soon be able to study for a bachelor of science degree in artificial intelligence.
What can one do with an AI degree? The University of Pennsylvania says students will be able to apply the skills they learn in school to build responsible AI tools, develop materials for emerging chips and hardware, and create AI-driven breakthroughs in healthcare through new antibiotics, among other things.
Google on Monday announced plans to help train people in Europe with skills in artificial intelligence, the latest tech giant to invest in preparing workers and economies amid the disruption brought on by technologies they are racing to develop.
The acceleration of AI deployments has gotten so absurdly out of hand that a draft post I started a week ago about a new development is now out of date.
… The Pace is Out of Control
A mere week since Ultra 1.0’s announcement, Google has now introduced us to Gemini 1.5, a model they are clearly positioning to be the leader in the field. Here is the full technical report for Gemini 1.5, and what it can do is stunning.
[St. Louis, MO, February 14, 2024] – In a bold move that counters the conventions of more traditional schools, Maryville University has unveiled a substantial $21 million multi-year investment in artificial intelligence (AI) and cutting-edge technologies. This groundbreaking initiative is set to transform the higher education experience, using the latest technology to support student success and a five-star experience for thousands of students both on campus and online.
In the realm of technological advancements, artificial intelligence (AI) stands out as a beacon of immeasurable potential, yet also as a source of existential angst when considering that AI might already be beyond our ability to control.
His research underscores a chilling truth: our current understanding and control of AI are woefully inadequate, posing a threat that could either lead to unprecedented prosperity or catastrophic extinction.
From DSC: This next item is for actors, actresses, and voiceover specialists:
Turn your voice into passive income. — from elevenlabs.io; via Ben’s Bites. Are you a professional voice actor? Sign up and share your voice today to start earning rewards every time it’s used.
The scammers used digitally recreated versions of an international company’s Chief Financial Officer and other employees to order $25 million in money transfers during a video conference call containing just one real person.
The victim, an employee at the Hong Kong branch of an unnamed multinational firm, was duped into taking part in a video conference call in which they were the only real person – the rest of the group were fake representations of real people, writes SCMP.
As we’ve seen in previous incidents where deepfakes were used to recreate someone without their permission, the scammers utilized publicly available video and audio footage to create these digital versions.
Since we launched Bard last year, people all over the world have used it to collaborate with AI in a completely new way — to prepare for job interviews, debug code, brainstorm new business ideas or, as we announced last week, create captivating images.
Our mission with Bard has always been to give you direct access to our AI models, and Gemini represents our most capable family of models. To reflect this, Bard will now simply be known as Gemini.
A new way to discover places with generative AI in Maps — from blog.google by Miriam Daniel; via AI Valley. Here’s a look at how we’re bringing generative AI to Maps — rolling out this week to select Local Guides in the U.S.
Today, we’re introducing a new way to discover places with generative AI to help you do just that — no matter how specific, niche or broad your needs might be. Simply say what you’re looking for and our large-language models (LLMs) will analyze Maps’ detailed information about more than 250 million places and trusted insights from our community of over 300 million contributors to quickly make suggestions for where to go.
Starting in the U.S., this early access experiment launches this week to select Local Guides, who are some of the most active and passionate members of the Maps community. Their insights and valuable feedback will help us shape this feature so we can bring it to everyone over time.
Google Prepares for a Future Where Search Isn’t King — from wired.com by Lauren Goode. CEO Sundar Pichai tells WIRED that Google’s new, more powerful Gemini chatbot is an experiment in offering users a way to get things done without a search engine. It’s also a direct shot at ChatGPT.
OpenAI on Thursday announced its first partnership with a higher education institution.
Starting in February, Arizona State University will have full access to ChatGPT Enterprise and plans to use it for coursework, tutoring, research and more.
The partnership has been in the works for at least six months.
ASU plans to build a personalized AI tutor for students, allow students to create AI avatars for study help and broaden the university’s prompt engineering course.
The collaboration between ASU and OpenAI brings the advanced capabilities of ChatGPT Enterprise into higher education, setting a new precedent for how universities enhance learning, creativity and student outcomes.
“ASU recognizes that augmented and artificial intelligence systems are here to stay, and we are optimistic about their ability to become incredible tools that help students to learn, learn more quickly and understand subjects more thoroughly,” ASU President Michael M. Crow said. “Our collaboration with OpenAI reflects our philosophy and our commitment to participating directly in the responsible evolution of AI learning technologies.”
AI <> Academia — from drphilippahardman.substack.com by Dr. Philippa Hardman. What might emerge from ASU’s pioneering partnership with OpenAI?
Phil’s Wish List #2: Smart Curriculum Development
ChatGPT assists in creating and updating course curricula, based on both student data and emerging domain and pedagogical research on the topic.
Output: using AI, it will be possible to review course content and make data-informed, automated recommendations based on the latest pedagogical and domain-specific research.
Potential Impact: increased dynamism and relevance in course content and reduced administrative lift for academics.
Unlike traditional leadership, adaptable leadership is not bound by rigid rules and protocols. Instead, it thrives on flexibility. Adaptable leaders are willing to experiment, make course corrections, and pivot when necessary. Adaptable leadership is about flexibility, resilience and a willingness to embrace change. It embodies several key principles that redefine the role of leaders in organizations:
Embracing uncertainty
Adaptable leaders understand that uncertainty is the new norm. They do not shy away from ambiguity but instead see it as an opportunity for growth and innovation. They encourage a culture of experimentation and learning from failure.
Empowering teams
Instead of dictating every move, adaptable leaders empower their teams to take ownership of their work. They foster an environment of trust and collaboration, enabling individuals to contribute their unique perspectives and skills.
Continuous learning
Adaptable leaders are lifelong learners. They constantly seek new knowledge, stay informed about industry trends, and encourage their teams to do the same. They understand that knowledge is a dynamic asset that must be constantly updated.
Major AI in Education Related Developments this week — from stefanbauschard.substack.com by Stefan Bauschard. ASU integrates with ChatGPT, K-12 AI integrations, Agents & the Rabbit, Uruguay, Meta and AGI, Rethinking curriculum.
“The greatest risk is leaving school curriculum unchanged when the entire world is changing.” Hadi Partovi, founder of Code.org and angel investor in Facebook, Dropbox, Airbnb, and Uber
Tutorbots in college. On a more limited scale, Georgia State University, Morgan State University, and the University of Central Florida are piloting a project using chatbots to support students in foundational math and English courses.
Pioneering AI-Driven Instructional Design in Small College Settings — from campustechnology.com by Gopu Kiron. For institutions that lack the budget or staff expertise to utilize instructional design principles in online course development, generative AI may offer a way forward.
Unfortunately, smaller colleges — arguably the institutions whose students are likely to benefit the most from ID enhancements — frequently find themselves excluded from authentically engaging in the ID arena due to tight budgets, limited faculty online course design expertise, and the lack of ID-specific staff roles. Despite this, recent developments in generative AI may offer these institutions a low-cost, tactical avenue to compete with more established players.
There’s a new AI from Google DeepMind called AlphaGeometry that totally nails solving super hard geometry problems. We’re talking problems so tough only math geniuses who compete in the International Mathematical Olympiad can figure them out.