Cultivating a responsible innovation mindset among future tech leaders — from timeshighereducation.com by Andreas Alexiou from the University of Southampton
The classroom is a perfect place to discuss the messy, real-world consequences of technological discoveries, writes Andreas Alexiou. Beyond ‘How?’, students should be asking ‘Should we…?’ and ‘What if…?’ questions around ethics and responsibility.
University educators play a crucial role in guiding students to think about the next big invention and its implications for privacy, the environment and social equity. To truly make a difference, we need to bring ethics and responsibility into the classroom in a way that resonates with students. Here’s how.
Debating the incorporation of ethical frameworks into innovation, product development or technology adoption with industry pioneers is eye-opening because it can lead students to confront assumptions they hadn’t questioned before.
…
Students need more than just skills; they need a mindset that sticks with them long after graduation. By making ethics and responsibility a key part of the learning process, educators are doing more than preparing students for a career; they’re preparing them to navigate a world shaped by their choices.
Call it the ultimate proving ground. Collaborating with teammates in the modern workplace requires fast, fluid thinking. Providing insights quickly, while juggling webcams and office messaging channels, is a startlingly good test, and enterprise AI is about to pass it — just in time to provide assistance to busy knowledge workers.
To support enterprises in boosting productivity with AI teammates, NVIDIA today introduced a new NVIDIA Enterprise AI Factory validated design at COMPUTEX. IT teams deploying and scaling AI agents can use the design to build accelerated infrastructure and easily integrate with platforms and tools from NVIDIA software partners.
NVIDIA also unveiled new NVIDIA AI Blueprints to aid developers building smart AI teammates. Using the new blueprints, developers can enhance employee productivity through adaptive avatars that understand natural communication and have direct access to enterprise data.
“AI is now infrastructure, and this infrastructure, just like the internet, just like electricity, needs factories,” Huang said. “These factories are essentially what we build today.”
“They’re not data centers of the past,” Huang added. “These AI data centers, if you will, are improperly described. They are, in fact, AI factories. You apply energy to it, and it produces something incredibly valuable, and these things are called tokens.”
More’s coming, Huang said, describing the growing power of AI to reason and perceive. That leads us to agentic AI — AI able to understand, think and act. Beyond that is physical AI — AI that understands the world. The phase after that, he said, is general robotics.
May 19 (Reuters) – Dell Technologies (DELL.N) on Monday unveiled new servers powered by Nvidia’s (NVDA.O) Blackwell Ultra chips, aiming to capitalize on the booming demand for artificial intelligence systems.
The servers, available in both air-cooled and liquid-cooled variations, support up to 192 Nvidia Blackwell Ultra chips but can be customized to include as many as 256 chips.
Nvidia (NVDA) rolled into this year’s Computex Taipei tech expo on Monday with several announcements, ranging from the development of humanoid robots to the opening up of its high-powered NVLink technology, which allows companies to build semi-custom AI servers with Nvidia’s infrastructure.
…
During the event on Monday, Nvidia revealed its Nvidia Isaac GR00T-Dreams, which the company says helps developers create enormous amounts of training data they can use to teach robots how to perform different behaviors and adapt to new environments.
Robot “Jailbreaks”
In the year or so since large language models hit the big time, researchers have demonstrated numerous ways of tricking them into producing problematic outputs, including hateful jokes, malicious code, phishing emails, and the personal information of users. It turns out that misbehavior can take place in the physical world, too: LLM-powered robots can easily be hacked so that they behave in potentially dangerous ways.
Researchers from the University of Pennsylvania were able to persuade a simulated self-driving car to ignore stop signs and even drive off a bridge, get a wheeled robot to find the best place to detonate a bomb, and force a four-legged robot to spy on people and enter restricted areas.
“We view our attack not just as an attack on robots,” says George Pappas, head of a research lab at the University of Pennsylvania who helped unleash the rebellious robots. “Any time you connect LLMs and foundation models to the physical world, you actually can convert harmful text into harmful actions.”
…
The robot “jailbreaks” highlight a broader risk that is likely to grow as AI models are increasingly used as a way for humans to interact with physical systems, or to enable AI agents to act autonomously on computers, say the researchers involved.
In an effort to automate scientific discovery using artificial intelligence (AI), researchers have created a virtual laboratory that combines several ‘AI scientists’ — large language models with defined scientific roles — that can collaborate to achieve goals set by human researchers.
The system, described in a preprint posted on bioRxiv last month [1], was able to design antibody fragments called nanobodies that can bind to the virus that causes COVID-19, proposing nearly 100 of these structures in a fraction of the time it would take an all-human research group.
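From a technical angle, this “virtual lab” pattern boils down to several role-prompted LLM calls sharing one transcript. The paper’s actual pipeline is more elaborate; below is only a minimal sketch of the role-play loop it describes, written against the OpenAI Python SDK for illustration (the roles, model name, and task string are my assumptions, not the authors’ code):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def ai_scientist(role: str, task: str, transcript: str) -> str:
    """One 'AI scientist': an LLM call constrained to a defined scientific role."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; any capable chat model works
        messages=[
            {"role": "system", "content": f"You are the lab's {role}. Be concise."},
            {"role": "user", "content": f"Goal: {task}\n\nLab notes so far:\n{transcript}"},
        ],
    )
    return response.choices[0].message.content

goal = "Propose nanobody candidates likely to bind the SARS-CoV-2 spike protein."
notes = ""
for role in ("principal investigator", "immunologist", "computational biologist", "critic"):
    contribution = ai_scientist(role, goal, notes)
    notes += f"\n--- {role} ---\n{contribution}\n"

print(notes)  # the accumulated 'lab meeting' transcript
```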
By embracing an agent-first approach, every CIO can redefine their business operations. AI agents are now the number one choice for CIOs, as they come pre-built and can generate responses that are consistent with a company’s brand using trusted business data, explains Thierry Nicault at Salesforce Middle East.
The system generates full 3D environments that expand beyond what’s visible in the original image, allowing users to explore new perspectives.
Users can freely navigate and view the generated space with standard keyboard and mouse controls, similar to browsing a website.
It includes real-time camera effects like depth-of-field and dolly zoom, as well as interactive lighting and animation sliders to tweak scenes.
The system works with both photos and AI-generated images, enabling creators to integrate it with text-to-image tools or even famous works of art.
Why it matters:
This technology opens up exciting possibilities for industries like gaming, film, and virtual experiences. Soon, creating fully immersive worlds could be as simple as generating a static image.
Today we’re sharing our first step towards spatial intelligence: an AI system that generates 3D worlds from a single image. This lets you step into any image and explore it in 3D.
Most GenAI tools make 2D content like images or videos. Generating in 3D instead improves control and consistency. This will change how we make movies, games, simulators, and other digital manifestations of our physical world.
In this post you’ll explore our generated worlds, rendered live in your browser. You’ll also experience different camera effects, 3D effects, and dive into classic paintings. Finally, you’ll see how creators are already building with our models.
Addendum on 12/5/24:
ChatGPT turns two: how has it impacted markets? — from moneyweek.com
Two years on from ChatGPT’s explosive launch into the public sphere, we assess the impact that it has had on stock markets and the world of technology.
Nearly every Fortune 500 company now uses artificial intelligence (AI) to screen resumes and assess test scores to find the best talent. However, new research from the University of Florida suggests these AI tools might not be delivering the results hiring managers expect.
The problem stems from a simple miscommunication between humans and machines: AI thinks it’s picking someone to hire, but hiring managers only want a list of candidates to interview.
Without knowing about this next step, the AI might choose safe candidates. But if it knows there will be another round of screening, it might suggest different and potentially stronger candidates.
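The statistical intuition here is worth spelling out: if an interview round will reveal each candidate’s true quality, a shortlist should favor high-variance “risky” candidates, while a one-shot hire should favor the highest expected quality. A toy simulation (my own illustration, not the study’s model) makes the gap visible:

```python
import random

random.seed(0)

# Toy model: each candidate's true quality is mean + noise; an interview reveals it.
candidates = [("safe", 7.0, 0.5), ("risky", 6.5, 3.0)]  # (label, mean, spread)

def draw(mean: float, spread: float) -> float:
    return random.gauss(mean, spread)

# Policy A: the AI thinks it is hiring directly -> pick the highest *expected* quality.
hire_direct = max(candidates, key=lambda c: c[1])

# Policy B: the AI knows an interview follows -> shortlist both, keep the best draw.
trials = 100_000
total = sum(max(draw(m, s) for _, m, s in candidates) for _ in range(trials))
avg_after_interview = total / trials

print(f"Direct hire picks '{hire_direct[0]}' with expected quality {hire_direct[1]}")
print(f"Shortlisting both, then interviewing, yields ~{avg_after_interview:.2f} on average")
```

The high-variance candidate, worthless to a direct-hire objective, is exactly what makes the interview shortlist valuable.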
In the last two years, the world has seen breakneck advancement in the generative AI space, from text-to-text to text-to-image and text-to-video capabilities. All of that has been a stepping stone for the next big AI breakthrough: AI agents. According to Bloomberg, OpenAI is preparing to launch its first autonomous AI agent, codenamed ‘Operator,’ as soon as January 2025.
Apparently, this OpenAI agent, or Operator, as it’s codenamed, is designed to perform complex tasks independently. By understanding user commands through voice or text, the agent will seemingly handle tasks such as controlling different applications on the computer, sending emails, and booking flights: stuff that ChatGPT, Copilot, Google Gemini or any other LLM-based chatbot just can’t do on its own.
In the enterprise of the future, human workers are expected to work closely alongside sophisticated teams of AI agents.
According to McKinsey, generative AI and other technologies have the potential to automate 60 to 70% of employees’ work. And, already, an estimated one-third of American workers are using AI in the workplace — oftentimes unbeknownst to their employers.
However, experts predict that 2025 will be the year that these so-called “invisible” AI agents begin to come out of the shadows and take more of an active role in enterprise operations.
“Agents will likely fit into enterprise workflows much like specialized members of any given team,” said Naveen Rao, VP of AI at Databricks and founder and former CEO of MosaicAI.
A recent report from McKinsey predicts that generative AI could unlock $2.6 trillion to $4.4 trillion in value annually within product development and innovation across various industries. This staggering figure highlights just how significantly generative AI is set to transform the landscape of product development. Generative AI app development is driving innovation by using the power of advanced algorithms to generate new ideas, optimize designs, and personalize products at scale. It is also becoming a cornerstone of competitive advantage in today’s fast-paced market. As businesses look to stay ahead, understanding and integrating technologies like generative AI app development into product development processes is becoming more crucial than ever.
AI agents handle complex, autonomous tasks beyond simple commands, showcasing advanced decision-making and adaptability.
The Based AI Agent template by Coinbase and Replit provides an easy starting point for developers to build blockchain-enabled AI agents.
Based AI agents integrate specifically with blockchain, supporting crypto wallets and transactions.
Securing API keys in development is crucial to protect the agent from unauthorized access; a minimal example follows this list.
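On that last point, the usual baseline applies whatever template you start from: keep keys out of source code and load them from the environment. A minimal sketch (the variable names are hypothetical, not the template’s actual configuration):

```python
import os

# Load secrets from environment variables (or a .env file excluded from version
# control) rather than hard-coding them where they could leak into a public repo.
API_KEY = os.environ.get("AGENT_API_KEY")          # hypothetical variable name
WALLET_SEED = os.environ.get("AGENT_WALLET_SEED")  # hypothetical variable name

if not API_KEY or not WALLET_SEED:
    raise RuntimeError("Missing credentials: set AGENT_API_KEY and AGENT_WALLET_SEED")
```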
What are AI Agents and How Are They Used in Different Industries? — from rtinsights.com by Salvatore Salamone
AI agents enable companies to make smarter, faster, and more informed decisions. From predictive maintenance to real-time process optimization, these agents are delivering tangible benefits across industries.
In a groundbreaking study, researchers from Penn Engineering showed how AI-powered robots can be manipulated to ignore safety protocols, allowing them to perform harmful actions despite normally rejecting dangerous task requests.
What did they find?
Researchers found previously unknown security vulnerabilities in AI-governed robots and are working to address these issues to ensure the safe use of large language models (LLMs) in robotics.
Their newly developed algorithm, RoboPAIR, reportedly achieved a 100% jailbreak rate by bypassing the safety protocols on three different AI robotic systems in a few days.
Using RoboPAIR, researchers were able to manipulate test robots into performing harmful actions, like bomb detonation and blocking emergency exits, simply by changing how they phrased their commands.
Why does it matter?
This research highlights the importance of spotting weaknesses in AI systems to improve their safety, allowing us to test and train them to prevent potential harm.
From DSC: Great! Just what we wanted to hear. But does it surprise anyone? Even so…we move forward at warp speeds.
From DSC:
So, given the above item, does the next item make you a bit nervous as well? I saw someone on Twitter/X exclaim, “What could go wrong?” I can’t say I didn’t feel the same way.
We’re also introducing a groundbreaking new capability in public beta: computer use. Available today on the API, developers can direct Claude to use computers the way people do—by looking at a screen, moving a cursor, clicking buttons, and typing text. Claude 3.5 Sonnet is the first frontier AI model to offer computer use in public beta. At this stage, it is still experimental—at times cumbersome and error-prone. We’re releasing computer use early for feedback from developers, and expect the capability to improve rapidly over time.
Per The Rundown AI:
The Rundown: Anthropic just introduced a new capability called ‘computer use’, alongside upgraded versions of its AI models, which enables Claude to interact with computers by viewing screens, typing, moving cursors, and executing commands.
… Why it matters: While many hoped for Opus 3.5, Anthropic’s Sonnet and Haiku upgrades pack a serious punch. Plus, with the new computer use embedded right into its foundation models, Anthropic just sent a warning shot to tons of automation startups—even if the capabilities aren’t earth-shattering… yet.
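For developers who want to kick the tires, the beta is reachable through Anthropic’s API. The sketch below follows the shape of the beta as documented at launch; since the capability is experimental, the tool type and beta flag strings may change:

```python
import anthropic

client = anthropic.Anthropic()  # assumes ANTHROPIC_API_KEY is set

response = client.beta.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=1024,
    betas=["computer-use-2024-10-22"],  # opt in to the experimental beta
    tools=[{
        "type": "computer_20241022",    # virtual screen/mouse/keyboard tool
        "name": "computer",
        "display_width_px": 1024,
        "display_height_px": 768,
    }],
    messages=[{"role": "user", "content": "Open the browser and check the weather."}],
)

# Claude replies with tool_use blocks (screenshots to take, clicks to perform);
# your code executes them and returns the results to the model in a loop.
print(response.content)
```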
Also related/see:
What is Anthropic’s AI Computer Use? — from ai-supremacy.com by Michael Spencer
Task automation, AI at the intersection of coding and AI agents take on new frenzied importance heading into 2025 for the commercialization of Generative AI.
New Claude, Who Dis? — from theneurondaily.com
Anthropic just dropped two new Claude models…oh, and Claude can now use your computer.
What makes Act-One special? It can capture the soul of an actor’s performance using nothing but a simple video recording. No fancy motion capture equipment, no complex face rigging, no army of animators required. Just point a camera at someone acting, and watch as their exact expressions, micro-movements, and emotional nuances get transferred to an AI-generated character.
Think about what this means for creators: you could shoot an entire movie with multiple characters using just one actor and a basic camera setup. The same performance can drive characters with completely different proportions and looks, while maintaining the authentic emotional delivery of the original performance. We’re witnessing the democratization of animation tools that used to require millions in budget and years of specialized training.
Also related/see:
Introducing, Act-One. A new way to generate expressive character performances inside Gen-3 Alpha using a single driving video and character image. No motion capture or rigging required.
Google has signed a “world first” deal to buy energy from a fleet of mini nuclear reactors to generate the power needed for the rise in use of artificial intelligence.
The US tech corporation has ordered six or seven small modular reactors (SMRs) from California’s Kairos Power, with the first due to be completed by 2030 and the remainder by 2035.
After the extreme peak and summer slump of 2023, ChatGPT has been setting new traffic highs since May
ChatGPT has been topping its web traffic records for months now, with September 2024 traffic up 112% year-over-year (YoY) to 3.1 billion visits, according to Similarweb estimates. That’s a change from last year, when traffic to the site went through a boom-and-bust cycle.
Google has made a historic agreement to buy energy from a group of small modular reactors (SMRs) from Kairos Power in California. This is the first nuclear power deal specifically for AI data centers in the world.
Hey creators!
Made on YouTube 2024 is here and we’ve announced a lot of updates that aim to give everyone the opportunity to build engaging communities, drive sustainable businesses, and express creativity on our platform.
Below is a roundup with key info – feel free to upvote the announcements that you’re most excited about and subscribe to this post to get updates on these features! We’re looking forward to another year of innovating with our global community. It’s a future full of opportunities, and it’s all Made on YouTube!
Today, we’re announcing new agentic capabilities that will accelerate these gains and bring AI-first business process to every organization.
First, the ability to create autonomous agents with Copilot Studio will be in public preview next month.
Second, we’re introducing ten new autonomous agents in Dynamics 365 to build capacity for every sales, service, finance and supply chain team.
10 Daily AI Use Cases for Business Leaders — from flexos.work by Daan van Rossum
While AI is becoming more powerful by the day, business leaders still wonder why and where to apply it today. I take you through 10 critical use cases where AI should take over your work or partner with you.
Emerging Multi-Modal AI Video Creation Platforms
The rise of multi-modal AI platforms has revolutionized content creation, allowing users to research, write, and generate images in one app. Now, a new wave of platforms is extending these capabilities to video creation and editing.
Multi-modal video platforms combine various AI tools for tasks like writing, transcription, text-to-voice conversion, image-to-video generation, and lip-syncing. These platforms leverage open-source models like FLUX and LivePortrait, along with APIs from services such as ElevenLabs, Luma AI, and Gen-3.
From DSC: Whenever we’ve had a flat tire over the years, a tricky part of the repair process is jacking up the car so that no harm is done to the car (or to me!). There are some grooves underneath the Toyota Camry where one is supposed to put the jack. But as the car is very low to the ground, these grooves are very hard to find (even in good weather and light).
What’s needed is a robotic jack with vision.
If the jack had “vision” and had wheels on it, the device could locate the exact location of the grooves, move there, and then ask the owner whether they are ready for the car to be lifted up. The owner could execute that order when they are ready and the robotic jack could safely hoist the car up.
This type of robotic device is already out there in other areas. But this idea for assistance with replacing a flat tire represents an AI and robotic-based, consumer-oriented application that we’ll likely be seeing much more of in the future. Carmakers and suppliers, please add this one to your list!
AI’s Trillion-Dollar Opportunity — from bain.com by David Crawford, Jue Wang, and Roy Singh
The market for AI products and services could reach between $780 billion and $990 billion by 2027.
At a Glance
The big cloud providers are the largest concentration of R&D, talent, and innovation today, pushing the boundaries of large models and advanced infrastructure.
Innovation with smaller models (open-source and proprietary), edge infrastructure, and commercial software is reaching enterprises, sovereigns, and research institutions.
Commercial software vendors are rapidly expanding their feature sets to provide the best use cases and leverage their data assets.
Accelerated market growth. Nvidia’s CEO, Jensen Huang, summed up the potential in the company’s Q3 2024 earnings call: “Generative AI is the largest TAM [total addressable market] expansion of software and hardware that we’ve seen in several decades.”
And on a somewhat related note (i.e., emerging technologies), also see the following two postings:
Surgical Robots: Current Uses and Future Expectations — from medicalfuturist.com by Pranavsingh Dhunnoo
As the term implies, a surgical robot is an assistive tool for performing surgical procedures. Such manoeuvres, also called robotic surgeries or robot-assisted surgery, usually involve a human surgeon controlling mechanical arms from a control centre.
Key Takeaways
Robots’ potential has long been a fascination for humans and has even led to a booming field of robot-assisted surgery.
Surgical robots assist surgeons in performing accurate, minimally invasive procedures that are beneficial for patients’ recovery.
The assistance of robots extends beyond incisions and includes laparoscopies, radiosurgeries and, in the future, a combination of artificial intelligence technologies to assist surgeons in their craft.
“Working with the team from Proto to bring to life what several years ago would have seemed impossible is now going to allow West Cancer Center & Research Institute to pioneer options for patients to get highly specialized care without having to travel to large metro areas,” said West Cancer’s CEO, Mitch Graves.
Obviously this workflow works just as well for meetings as it does for lectures. Stay present in the meeting with no screens and just write down the key points with pen and paper. Then let NotebookLM assemble the detailed summary based on your high-level notes. https://t.co/fZMG7LgsWG
In a matter of months, organizations have gone from AI helping answer questions, to AI making predictions, to generative AI agents. What makes AI agents unique is that they can take actions to achieve specific goals, whether that’s guiding a shopper to the perfect pair of shoes, helping an employee looking for the right health benefits, or supporting nursing staff with smoother patient hand-offs during shift changes.
In our work with customers, we keep hearing that their teams are increasingly focused on improving productivity, automating processes, and modernizing the customer experience. These aims are now being achieved through the AI agents they’re developing in six key areas: customer service; employee empowerment; code creation; data analysis; cybersecurity; and creative ideation and production.
…
Here’s a snapshot of how 185 of these industry leaders are putting AI to use today, creating real-world use cases that will transform tomorrow.
AI Video Tools You Can Use Today — from heatherbcooper.substack.com by Heather Cooper
The latest AI video models that deliver results
AI video models are improving so quickly, I can barely keep up! I wrote about unreleased Adobe Firefly Video in the last issue, and we are no closer to public access to Sora.
No worries – we do have plenty of generative AI video tools we can use right now.
Kling AI launched its updated v1.5 and the quality of image or text to video is impressive.
Hailuo MiniMax text to video remains free to use for now, and it produces natural and photorealistic results (with watermarks).
Runway added the option to upload portrait aspect ratio images to generate vertical videos in Gen-3 Alpha & Turbo modes.
…plus several more
Advanced Voice is rolling out to all Plus and Team users in the ChatGPT app over the course of the week.
While you’ve been patiently waiting, we’ve added Custom Instructions, Memory, five new voices, and improved accents.
AI researcher Jim Fan has had a charmed career. He was OpenAI’s first intern before he did his PhD at Stanford with “godmother of AI,” Fei-Fei Li. He graduated into a research scientist position at Nvidia and now leads its Embodied AI “GEAR” group. The lab’s current work spans foundation models for humanoid robots to agents for virtual worlds. Jim describes a three-pronged data strategy for robotics, combining internet-scale data, simulation data and real world robot data. He believes that in the next few years it will be possible to create a “foundation agent” that can generalize across skills, embodiments and realities—both physical and virtual. He also supports Jensen Huang’s idea that “Everything that moves will eventually be autonomous.”
Runway Partners with Lionsgate — from runwayml.com via The Rundown AI
Runway and Lionsgate are partnering to explore the use of AI in film production.
Lionsgate and Runway have entered into a first-of-its-kind partnership centered around the creation and training of a new AI model, customized on Lionsgate’s proprietary catalog. Fundamentally designed to help Lionsgate Studios, its filmmakers, directors and other creative talent augment their work, the model generates cinematic video that can be further iterated using Runway’s suite of controllable tools.
Per The Rundown: Lionsgate, the film company behind The Hunger Games, John Wick, and Saw, teamed up with AI video generation company Runway to create a custom AI model trained on Lionsgate’s film catalogue.
The details:
The partnership will develop an AI model specifically trained on Lionsgate’s proprietary content library, designed to generate cinematic video that filmmakers can further manipulate using Runway’s tools.
Lionsgate sees AI as a tool to augment and enhance its current operations, streamlining both pre-production and post-production processes.
Runway is considering ways to offer similar custom-trained models as templates for individual creators, expanding access to AI-powered filmmaking tools beyond major studios.
Why it matters: As many writers, actors, and filmmakers strike against ChatGPT, Lionsgate is diving head-first into the world of generative AI through its partnership with Runway. This is one of the first major collabs between an AI startup and a major Hollywood company — and its success or failure could set precedent for years to come.
Each prompt on ChatGPT flows through a server that runs thousands of calculations to determine the best words to use in a response.
In completing those calculations, these servers, typically housed in data centers, generate heat. Often, water systems are used to cool the equipment and keep it functioning. Water transports the heat generated in the data centers into cooling towers to help it escape the building, similar to how the human body uses sweat to keep cool, according to Shaolei Ren, an associate professor at UC Riverside.
Where electricity is cheaper, or water comparatively scarce, electricity is often used to cool these warehouses with large units resembling air-conditioners, he said. That means the amount of water and electricity an individual query requires can depend on a data center’s location and vary widely.
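Those “thousands of calculations” are, at heart, repeated matrix arithmetic: the model scores every word in its vocabulary and picks from the top, once per generated token. A deliberately tiny sketch of that inner loop (real models use billions of parameters, which is exactly why the hardware runs hot):

```python
import numpy as np

vocab = ["the", "cat", "sat", "on", "a", "mat"]
rng = np.random.default_rng(0)
W = rng.standard_normal((len(vocab), 16))  # toy stand-in for billions of weights

def next_word(hidden_state: np.ndarray) -> str:
    logits = W @ hidden_state                # one slice of the matrix arithmetic
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                     # softmax over the vocabulary
    return vocab[int(np.argmax(probs))]      # greedy pick of the "best" word

print(next_word(rng.standard_normal(16)))
```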
AI, Humans and Work: 10 Thoughts — from rishad.substack.com by Rishad Tobaccowala
The Future Does Not Fit in the Containers of the Past. Edition 215.
10 thoughts about AI, Humans and Work in 10 minutes:
AI is still Under-hyped.
AI itself will be like electricity and is unlikely to be a differentiator for most firms.
AI is not alive but can be thought of as a new species.
Knowledge will be free and every knowledge worker’s job will change in 2025.
The key about AI is not to ask what AI will do to us but what AI can do for us.
A recent report published by the Identity Theft Resource Center (ITRC) found that data from 2023 shows “an environment where bad actors are more effective, efficient and successful in launching attacks. The result is fewer victims (or at least fewer victim reports), but the impact on individuals and businesses is arguably more damaging.”
One of these attacks involves fake job postings.
The details: The ITRC said that victim reports of job and employment scams spiked some 118% in 2023. These scams were primarily carried out through LinkedIn and other job search platforms.
The bad actors here would either create fake (but professional-looking) job postings, profiles and websites or impersonate legitimate companies, all with the hopes of landing victims to move onto the interview process.
These actors would then move the conversation onto a third-party messaging platform, and ask for identity verification information (driver’s licenses, social security numbers, direct deposit information, etc.).
Hypernatural is an AI video platform that makes it easy to create beautiful, ready-to-share videos from anything. Stop settling for glitchy three-second generated videos and boring stock footage. Turn your ideas, scripts, podcasts and more into incredible short-form videos in minutes.
OpenAI is committed to making intelligence as broadly accessible as possible. Today, we’re announcing GPT-4o mini, our most cost-efficient small model. We expect GPT-4o mini will significantly expand the range of applications built with AI by making intelligence much more affordable. GPT-4o mini scores 82% on MMLU and currently outperforms GPT-4 on chat preferences on the LMSYS leaderboard. It is priced at 15 cents per million input tokens and 60 cents per million output tokens, an order of magnitude more affordable than previous frontier models and more than 60% cheaper than GPT-3.5 Turbo.
GPT-4o mini enables a broad range of tasks with its low cost and latency, such as applications that chain or parallelize multiple model calls (e.g., calling multiple APIs), pass a large volume of context to the model (e.g., full code base or conversation history), or interact with customers through fast, real-time text responses (e.g., customer support chatbots).
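To make the pricing concrete, here is a minimal call plus a back-of-envelope cost check at the quoted rates (the prompt is illustrative, and the rates are as announced, so they may change):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Draft a two-sentence reply to a refund request."}],
)
print(response.choices[0].message.content)

# $0.15 per million input tokens, $0.60 per million output tokens (as quoted above)
usage = response.usage
cost = usage.prompt_tokens * 0.15 / 1e6 + usage.completion_tokens * 0.60 / 1e6
print(f"This call cost roughly ${cost:.6f}")
```

At these rates, even a chatbot passing a full conversation history on every turn stays in fractions of a cent per exchange, which is the point of the “chain or parallelize multiple model calls” use cases above.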
Also see what this means from Ben’s Bites and The Neuron, and as The Rundown AI asserts:
Why it matters: While it’s not GPT-5, the price and capabilities of this mini-release significantly lower the barrier to entry for AI integrations — and marks a massive leap over GPT-3.5 Turbo. With models getting cheaper, faster, and more intelligent with each release, the perfect storm for AI acceleration is forming.
From DSC: Very interesting to see the mention of an R&D department here! Very cool.
Baker said ninth graders in the R&D department designed the essential skills rubric for their grade so that regardless of what content classes students take, they all get the same immersion into critical career skills. Student voice is now so integrated into Edison’s core that teachers work with student designers to plan their units. And he said teachers are becoming comfortable with the language of career-centered learning and essential skills while students appreciate the engagement and develop a new level of confidence.
… The R&D department has grown to include teachers from every department working with students to figure out how to integrate essential skills into core academic classes. In this way, they’re applying one of the XQ Institute’s crucial Design Principles for innovative high schools: Youth Voice and Choice.
Client-connected projects have become a focal point of the Real World Learning initiative, offering students opportunities to solve real-world problems in collaboration with industry professionals.
Organizations like CAPS, NFTE, and Journalistic Learning facilitate community connections and professional learning opportunities, making it easier to implement client projects and entrepreneurship education.
Important trend: client projects. Work-based learning has been growing with career academies and renewed interest in CTE. Six years ago, a subset of WBL called client-connected projects became a focal point of the Real World Learning initiative in Kansas City, where they are defined as authentic problems that students solve in collaboration with professionals from industry, not-for-profit, and community-based organizations. These projects allow students to engage directly with employers, address real-world problems, and develop essential skills.
The Community Portrait approach encourages diverse voices to shape the future of education, ensuring it reflects the needs and aspirations of all stakeholders.
Active, representative community engagement is essential for creating meaningful and inclusive educational environments.
The Portrait of a Graduate—a collaborative effort to define what learners should know and be able to do upon graduation—has likely generated enthusiasm in your community. However, the challenge of future-ready graduates persists: How can we turn this vision into a reality within our diverse and dynamic schools, especially amid the current national political tensions and contentious curriculum debates?
The answer lies in active, inclusive community engagement. It’s about crafting a Community Portrait that reflects the rich diversity of our neighborhoods. This approach, grounded in the same principles used to design effective learning systems, seeks to cultivate deep, reciprocal relationships within the community. When young people are actively involved, the potential for meaningful change increases exponentially.
Although Lindsay E. Jones came from a family of educators, she didn’t expect that going to law school would steer her back into the family business. Over the years she became a staunch advocate for children with disabilities. And as mom to a son with learning disabilities and ADHD who is in high school and doing great, her advocacy is personal.
Jones previously served as president and CEO of the National Center for Learning Disabilities and was senior director for policy and advocacy at the Council for Exceptional Children. Today, she is the CEO at CAST, an organization focused on creating inclusive learning environments in K–12. EdTech: Focus on K–12 spoke with Jones about how digital transformation, artificial intelligence and visionary leaders can support inclusive learning environments.
Our brains are all as different as our fingerprints, and throughout its 40-year history, CAST has been focused on one core value: People are not broken, systems are poorly designed. And those systems are creating a barrier that holds back human innovation and learning.
Microsoft’s new ChatGPT competitor… — from The Rundown AI
The Rundown: Microsoft is reportedly developing a massive 500B-parameter in-house LLM called MAI-1, aiming to compete with top AI models from OpenAI, Anthropic, and Google.
Hampton runs a private community for high-growth tech founders and CEOs. We asked our community of founders and owners how AI has impacted their business and what tools they use.
Here’s a sneak peek of what’s inside:
The budgets they set aside for AI research and development
The most common (and obscure) tools founders are using
Measurable business impacts founders have seen through using AI
Where they are purposefully not using AI and much more
To help leaders and organizations overcome AI inertia, Microsoft and LinkedIn looked at how AI will reshape work and the labor market broadly, surveying 31,000 people across 31 countries, identifying labor and hiring trends from LinkedIn, and analyzing trillions of Microsoft 365 productivity signals as well as research with Fortune 500 customers. The data points to insights every leader and professional needs to know—and actions they can take—when it comes to AI’s implications for work.
The artificial intelligence sector has never been more competitive. Forbes received some 1,900 submissions this year, more than double last year’s count. Applicants do not pay a fee to be considered and are judged for their business promise and technical usage of AI through a quantitative algorithm and qualitative judging panels. Companies are encouraged to share data on diversity, and our list aims to promote a more equitable startup ecosystem. But disparities remain sharp in the industry. Only 12 companies have women cofounders, five of whom serve as CEO, the same count as last year. For more, see our full package of coverage, including a detailed explanation of the list methodology, videos and analyses on trends in AI.
New Generative AI video tools coming to Premiere Pro this year will streamline workflows and unlock new creative possibilities, from extending a shot to adding or removing objects in a scene
Adobe is developing a video model for Firefly, which will power video and audio editing workflows in Premiere Pro and enable anyone to create and ideate
Adobe previews early explorations of bringing third-party generative AI models from OpenAI, Pika Labs and Runway directly into Premiere Pro, making it easy for customers to draw on the strengths of different models within the powerful workflows they use every day
AI-powered audio workflows in Premiere Pro are now generally available, making audio editing faster, easier and more intuitive
NVIDIA Digital Human Technologies Bring AI Characters to Life
Leading AI Developers Use Suite of NVIDIA Technologies to Create Lifelike Avatars and Dynamic Characters for Everything From Games to Healthcare, Financial Services and Retail Applications
Today is the beginning of our moonshot to solve embodied AGI in the physical world. I’m so excited to announce Project GR00T, our new initiative to create a general-purpose foundation model for humanoid robot learning.
As the FlexOS research study “Generative AI at Work” concluded based on a survey amongst knowledge workers, ChatGPT reigns supreme.
…
2. AI Tool Usage is Way Higher Than People Expect – Beating Netflix, Pinterest, Twitch
As measured by data analysis platform Similarweb based on global web traffic tracking, the AI tools in this list generate over 3 billion monthly visits.
With 1.67 billion visits, ChatGPT represents over half of this traffic and is already bigger than Netflix, Microsoft, Pinterest, Twitch, and The New York Times.
Something unusual is happening in America. Demand for electricity, which has stayed largely flat for two decades, has begun to surge.
Over the past year, electric utilities have nearly doubled their forecasts of how much additional power they’ll need by 2028 as they confront an unexpected explosion in the number of data centers, an abrupt resurgence in manufacturing driven by new federal laws, and millions of electric vehicles being plugged in.
The tumult could seem like a distraction from the startup’s seemingly unending march toward AI advancement. But the tension, and the latest debate with Musk, illuminates a central question for OpenAI, along with the tech world at large as it’s increasingly consumed by artificial intelligence: Just how open should an AI company be?
…
The meaning of the word “open” in “OpenAI” seems to be a particular sticking point for both sides — something that you might think sounds, on the surface, pretty clear. But actual definitions are both complex and controversial.
In partnership with the National Cancer Institute, or NCI, researchers from the Department of Energy’s Oak Ridge National Laboratory and Louisiana State University developed a long-sequenced AI transformer capable of processing millions of pathology reports to provide experts researching cancer diagnoses and management with exponentially more accurate information on cancer reporting.
Last month, OpenAI chief executive Sam Altman finally admitted what researchers have been saying for years — that the artificial intelligence (AI) industry is heading for an energy crisis. It’s an unusual admission. At the World Economic Forum’s annual meeting in Davos, Switzerland, Altman warned that the next wave of generative AI systems will consume vastly more power than expected, and that energy systems will struggle to cope. “There’s no way to get there without a breakthrough,” he said.
I’m glad he said it. I’ve seen consistent downplaying and denial about the AI industry’s environmental costs since I started publishing about them in 2018. Altman’s admission has got researchers, regulators and industry titans talking about the environmental impact of generative AI.
Yesterday, Nvidia reported $22.1 billion in revenue for its fourth quarter of fiscal 2024 (ending January 31, 2024), easily topping Wall Street’s expectations. Revenue grew 265% from a year ago, thanks to the explosive growth of generative AI.
…
He also repeated a notion about “sovereign AI”: countries protecting the data of their users, and companies protecting the data of their employees, by keeping large language models contained within the borders of the country or the company for safety purposes.
BREAKING: Adobe has created a new 50-person AI research org called CAVA (Co-Creation for Audio, Video, & Animation).
I can’t help but wonder if OpenAI’s Sora has been a wake-up call for Adobe to formalize and accelerate their video and multimodal creation efforts?
Nvidia is building a new type of data centre called an AI factory. Every company (biotech, self-driving, manufacturing, etc.) will need an AI factory.
Jensen is looking forward to foundational robotics and state space models. According to him, foundational robotics could have a breakthrough next year.
The crunch for Nvidia GPUs is here to stay. It won’t be able to catch up on supply this year, and probably not next year either.
A new generation of GPUs called Blackwell is coming out, and the performance of Blackwell is off the charts.
Nvidia’s business is now roughly 70% inference and 30% training, meaning AI is getting into users’ hands.