Kids representing a broad range of special populations have a strong presence in today’s microschooling movement. More than 50 percent of microschools surveyed nationally serve children with neurodiversities, children with other special needs, and children who arrive two or more grades below “grade level mastery” as defined by their state, according to the Center’s 2024 American Microschools Sector Analysis report.
Children who have experienced emotional trauma or housing or food insecurity are also being widely served in microschools, according to leaders surveyed nationally.
This won’t come as a surprise to most in the microschooling movement. But for those who are less familiar, it is worth understanding the many ways that microschooling helps families and children who have struggled in their prior schooling settings to thrive.
The number of colleges that close each year is poised to significantly increase as schools contend with a slowdown in prospective students.
That’s the finding of a new working paper published by the Federal Reserve Bank of Philadelphia, where researchers created predictive models of schools’ financial distress using metrics like enrollment and staffing patterns, sources of revenue and liquidity data. They overlaid those models with simulations to estimate the likely increase of future closures.
Excerpt from the working paper:
We document a high degree of missing data among colleges that eventually close and show that this is a key impediment to identifying at risk institutions. We then show that modern machine learning techniques, combined with richer data, are far more effective at predicting college closures than linear probability models, and considerably more effective than existing accountability metrics. Our preferred model, which combines an off-the-shelf machine learning algorithm with the richest set of explanatory variables, can significantly improve predictive accuracy even for institutions with complete data, but is particularly helpful for predicting instances of financial distress for institutions with spotty data.
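To make the paper’s approach a bit more concrete: the authors feed richer institutional data into an off-the-shelf machine-learning classifier rather than a linear probability model. The sketch below is only an illustration of that general idea, not the authors’ actual code; the file, column names, and the choice of scikit-learn’s HistGradientBoostingClassifier (picked because it tolerates missing values, a key issue the paper raises) are assumptions.

```python
# Illustrative sketch (not the authors' code): predicting college closure risk
# with an off-the-shelf gradient-boosting classifier that handles missing data.
# The data file and column names are hypothetical.
import pandas as pd
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Hypothetical institution-year panel: enrollment, staffing, revenue mix, liquidity.
df = pd.read_csv("institution_panel.csv")
features = ["enrollment", "staff_fte", "tuition_share_of_revenue",
            "state_appropriations", "days_cash_on_hand", "endowment_per_student"]
X = df[features]                      # may contain NaNs; this model handles them natively
y = df["closed_within_3_years"]       # 1 if the institution closed within three years

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

model = HistGradientBoostingClassifier()  # no imputation step needed for missing values
model.fit(X_train, y_train)

risk = model.predict_proba(X_test)[:, 1]  # estimated probability of closure
print("AUC:", roc_auc_score(y_test, risk))
```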
From DSC: Questions that come to my mind here include:
Shouldn’t the public — especially the relevant parents and students — be made more aware of these types of papers and reports?
How would any of us like finishing 1-3 years of school and then being told that our college or university was closing, effective immediately? (This has happened many times already, and with the demographic cliff starting to hit higher education, it will happen even more often.) Adding insult to injury, when we transfer to different institutions, we’re told that many of our prior credits don’t transfer, adding significantly to the overall cost of obtaining our degrees.
Would we not be absolutely furious to receive such communications from our prior — and new — colleges and universities?
Will all of these types of closures move more people to this vision?
Relevant excerpts from Ray Schroeder’s recent articles at insidehighered.com:
A number of factors are converging to create a huge storm. Generative AI advances, massive federal policy shifts, broad societal and economic changes, and the demographic cliff combine to create uncertainty today and change tomorrow.
The anticipated enrollment cliff, reductions in federal and state funding, increased inflation, and dwindling public support for tuition increases will combine to put even greater pressure on university budgets.
On the positive side, completion rates have been improving:
National college completion rate ticks up to 61.1% — from highereddive.com by Natalie Schwartz. Those who started at two-year public colleges helped drive the overall increase in students completing a credential.
Dive Brief:
Completion rates ticked up to 61.1% for students who entered college in fall 2018, a 0.5 percentage-point increase compared to the previous cohort, according to data released Wednesday by the National Student Clearinghouse Research Center.
The increase marks the highest six-year completion rate since 2007 when the clearinghouse began tracking the data. The growth was driven by fewer students stopping out of college, as well as completion gains among students who started at public two-year colleges.
“Higher completion rates are welcome news for colleges and universities still struggling to regain enrollment levels from before the pandemic,” Doug Shapiro, the research center’s executive director, said in a statement dated Wednesday.
The stakes are huge, because the concern is that maybe the social contract between students and professors is kind of breaking down. Do students believe that all this college lecturing is worth hearing? Or, will this moment force a change in the way college teaching is done?
When it comes to classroom edtech use, digital tools have a drastically different impact when they are used actively rather than passively, a critical difference examined in the 2023-2024 Speak Up Research by Project Tomorrow.
Students also outlined their ideal active learning technologies:
Client expectations have shifted significantly in today’s technology-driven world. Quick communication and greater transparency are now a priority for clients throughout the entire case life cycle. This growing demand for tech-enhanced processes comes not only from clients but also from staff, and is set to rise even further as more advances become available.
…
I see the shift to cloud-based digital systems, especially for small and midsized law firms, as evening the playing field by providing access to robust tools that can aid legal services. Here are some examples of how legal professionals are leveraging tech every day…
Just 10% of law firms and 21% of corporate legal teams have now implemented policies to guide their organisation’s use of generative AI, according to a report out today (2 December) from Thomson Reuters.
Artificial Intelligence (AI) has been rapidly deployed around the world in a growing number of sectors, offering unprecedented opportunities while raising profound legal and ethical questions. This symposium will explore the transformative power of AI, focusing on its benefits, limitations, and the legal challenges it poses.
AI’s ability to revolutionize sectors such as healthcare, law, and business holds immense potential, from improving efficiency and access to services, to providing new tools for analysis and decision-making. However, the deployment of AI also introduces significant risks, including bias, privacy concerns, and ethical dilemmas that challenge existing legal and regulatory frameworks. As AI technologies continue to evolve, it is crucial to assess their implications critically to ensure responsible and equitable development.
The role of legal teams in creating AI ethics guardrails — from legaldive.com by Catherine Dawson. For organizations to balance the benefits of artificial intelligence with its risks, it’s important for counsel to develop policy on data governance and privacy.
How Legal Aid and Tech Collaboration Can Bridge the Justice Gap — from law.com by Kelli Raker and Maya Markovich. “Technology, when thoughtfully developed and implemented, has the potential to expand access to legal services significantly,” write Kelli Raker and Maya Markovich.
Challenges and Concerns
Despite the potential benefits, legal aid organizations face several hurdles in working with new technologies:
1. Funding and incentives: Most funding for legal aid is tied to direct legal representation, leaving little room for investment in general case management or exploration of innovative service delivery methods to exponentially scale impact.
2. Jurisdictional inconsistency: The lack of a unified court system or standardized forms across regions makes it challenging to develop accurate and widely applicable tech solutions in certain types of matters.
3. Organizational capacity: Many legal aid organizations lack the time and resources to thoroughly evaluate new tech offerings or collaboration opportunities or identify internal workflows and areas of unmet need with the highest chance for impact.
4. Data privacy and security: Legal aid providers need assurance that tech protects client data and avoids misuse of sensitive information.
5. Ethical considerations: There’s significant concern about the accuracy of information produced by consumer-facing technology and the potential for inadvertent unauthorized practice of law.
2024: The State of Generative AI in the Enterprise — from menlovc.com (Menlo Ventures). The enterprise AI landscape is being rewritten in real time. As pilots give way to production, we surveyed 600 U.S. enterprise IT decision-makers to reveal the emerging winners and losers.
This spike in spending reflects a wave of organizational optimism; 72% of decision-makers anticipate broader adoption of generative AI tools in the near future. This confidence isn’t just speculative—generative AI tools are already deeply embedded in the daily work of professionals, from programmers to healthcare providers.
Despite this positive outlook and increasing investment, many decision-makers are still figuring out what will and won’t work for their businesses. More than a third of our survey respondents do not have a clear vision for how generative AI will be implemented across their organizations. This doesn’t mean they’re investing without direction; it simply underscores that we’re still in the early stages of a large-scale transformation. Enterprise leaders are just beginning to grasp the profound impact generative AI will have on their organizations.
Business spending on generative AI surged 500% this year, hitting $13.8 billion — up from just $2.3 billion in 2023, according to data from Menlo Ventures released Wednesday.
OpenAI ceded market share in enterprise AI, declining from 50% to 34%, per the report.
Amazon-backed Anthropic doubled its market share from 12% to 24%.
Microsoft has quietly built the largest enterprise AI agent ecosystem, with over 100,000 organizations creating or editing AI agents through its Copilot Studio since launch – a milestone that positions the company ahead in one of enterprise tech’s most closely watched and exciting segments.
…
The rapid adoption comes as Microsoft significantly expands its agent capabilities. At its Ignite conference [that started on 11/19/24], the company announced it will allow enterprises to use any of the 1,800 large language models (LLMs) in the Azure catalog within these agents – a significant move beyond its exclusive reliance on OpenAI’s models. The company also unveiled autonomous agents that can work independently, detecting events and orchestrating complex workflows with minimal human oversight.
To understand the implications of AI agents, it’s useful to clarify the distinctions between AI, generative AI, and AI agents and explore the opportunities and risks they present to our autonomy, relationships, and decision-making.
… AI Agents: These are specialized applications of AI designed to perform tasks or simulate interactions. AI agents can be categorized into:
Tool Agents…
Simulation Agents…
While generative AI creates outputs from prompts, AI agents use AI to act with intention, whether to assist (tool agents) or emulate (simulation agents). The latter’s ability to mirror human thought and action offers fascinating possibilities — and raises significant risks.
1. Direct job relevance
One of the biggest draws of skill-based training is its direct relevance to employees’ daily roles. By focusing on teaching job-specific skills, this approach helps workers feel immediately empowered to apply what they learn, leading to a quick payoff for both the individual and the organization. Yet, while this tight focus is a major benefit, it’s important to consider some potential drawbacks that could arise from an overly narrow approach.
Be wary of:
Overly Narrow Focus: Highly specialized training might leave employees with little room to apply their skills to broader challenges, limiting versatility and growth potential.
Risk of Obsolescence: Skills can quickly become outdated, especially in fast-evolving industries. L&D leaders should aim for regular updates to maintain relevance.
Neglect of Soft Skills: While technical skills are crucial, ignoring soft skills like communication and problem-solving may lead to a lack of balanced competency.
2. Enhanced job performance…
3. Addresses skill gaps…
…and several more areas to consider
Content creation and updates: AI streamlines the creation of training materials by identifying resource gaps and generating tailored content, while also refreshing existing materials based on industry trends and employee feedback to maintain relevance.
Data-driven insights: Use AI tools to provide valuable analytics that inform course development and instructional strategies, helping learning designers identify effective practices and improve overall learning outcomes.
Efficiency: Automating repetitive tasks, such as learner assessments and administrative duties, enables L&D professionals to concentrate on developing impactful training programs and fostering learner engagement.
Concerns
Limited understanding of context: AI may struggle to understand the specific educational context or the unique needs of diverse learner populations, potentially hindering effectiveness.
Oversimplification of learning: AI may reduce complex educational concepts to simple metrics or algorithms, oversimplifying the learning process and neglecting deeper cognitive development.
Resistance to change: Learning leaders may face resistance from staff who are skeptical about integrating AI into their training practices.
Scenario-based learning immerses learners in realistic scenarios that mimic real-world challenges they might face in their roles. These learning experiences are highly relevant and relatable. SBL is active learning. Instead of passively consuming information, learners actively engage with the content by making decisions and solving problems within the scenario. This approach enhances critical thinking and decision-making skills.
SBL can be more effective when storytelling techniques create a narrative that guides learners through the scenario, maintaining engagement and making the learning memorable. Learners receive immediate feedback on their decisions and learn from their mistakes, and reflection can deepen their understanding. Branching scenarios simulate complex decision-making by letting learner choices lead to different outcomes, showing the consequences of various actions.
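Structurally, a branching scenario is a small decision tree: each node presents a situation, each learner choice leads to a different node, and terminal nodes deliver feedback on the outcome. Here is a minimal sketch of that structure; the scenario text, choices, and function names are invented purely for illustration.

```python
# Minimal sketch of a branching scenario modeled as a decision tree.
# Scenario content and names are invented for illustration.
scenario = {
    "start": {
        "text": "A client emails that a deliverable is late. What do you do first?",
        "choices": {
            "Apologize and promise a new date immediately": "overpromise",
            "Check the project status before replying": "investigate",
        },
    },
    "overpromise": {
        "text": "You committed to a date you can't meet. The client escalates.",
        "choices": {},  # terminal node: the consequence itself is the feedback
    },
    "investigate": {
        "text": "You reply with an accurate status and a realistic date. The client thanks you.",
        "choices": {},
    },
}

def run(node_id: str = "start") -> None:
    """Walk the learner through the scenario, branching on each choice."""
    node = scenario[node_id]
    print(node["text"])
    if not node["choices"]:
        return  # outcome reached; immediate feedback ends this branch
    for i, choice in enumerate(node["choices"], 1):
        print(f"  {i}. {choice}")
    picked = list(node["choices"].values())[int(input("Choose: ")) - 1]
    run(picked)
```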
The role of L&D leaders in AI digital literacy
For L&D leaders, developing AI digital literacy within an organization requires a well-structured curriculum and development plan that equips employees with the knowledge, skills, and ethical grounding needed to thrive in an AI-augmented workplace. This curriculum should encompass a range of competencies that enhance technical understanding and foster a mindset ready for innovation and responsible use of AI. Key areas to focus on include:
The debate comes as the number of students with disabilities is growing. Some 7.5 million students required special education services as of the 2022-23 school year, the latest federal data shows, or around 15% of students. That was up from 7.1 million or 14% of students in the 2018-19 school year, just before the pandemic hit.
It’s unclear if the rise is due to schools getting better at identifying students with disabilities or if more children have needs now. Many young children missed early intervention and early special education services during the pandemic, and many educators say they are seeing higher behavioral needs and wider academic gaps in their classrooms.
“Students are arriving in our classrooms with a high level of dysregulation, which is displayed through their fight, flight, or freeze responses,” Tiffany Anderson, the superintendent of Topeka, Kansas’ public schools, wrote in her statement. “Students are also displaying more physically aggressive behavior.”
This report examines the evolving landscape of credentialing and learner records within global education systems, highlighting a shift from traditional time-based signals—such as courses and grades—to competency-based signals (credentials and learner records).
In my 15+ years of teaching, I have had students with autism spectrum disorder, ADHD, dyslexia, and a range of learning disabilities. I have grown in my understanding of inclusive teaching practices and I strive to incorporate universal design principles in my teaching.
From my classroom experience, I know that retrieval practice improves learning for all of my students, including those who are neurodiverse. But what have researchers found about retrieval practice with neurodiverse learners?
Learning Management In The AI Future
While LMS platforms like Canvas have positively impacted education, they’ve rarely lived up to their potential for personalized learning. With the advent of artificial intelligence (AI), this is set to change in revolutionary ways.
The promise of AI lies in its ability to automate repetitive tasks associated with student assessment and management, freeing educators to focus on education. More significantly, AI has the potential to go beyond the narrow focus on the end products of learning (like assignments) to capture insights into the learning process itself. This means analyzing the entire transcript of activities within the LMS, providing a dynamic, data-driven view of student progress rather than just seeing signposts of where students have been and what they have taken away.
…
Things become more potent when we move from a particular student’s traversal of a specific course to large aggregations of students traversing similar courses. This is why Instructure’s acquisition of Parchment, a company specializing in credential and transcript management, is so significant.
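As a rough illustration of what “analyzing the entire transcript of activities” could look like in practice, the sketch below aggregates a hypothetical LMS event log into per-student engagement signals. The event schema, file name, and thresholds are invented for the example, not Canvas’s (or any LMS’s) actual data model.

```python
# Rough illustration: turning a hypothetical LMS event log into per-student
# progress signals. The event schema is invented, not any LMS's real data model.
import pandas as pd

events = pd.read_csv("lms_events.csv")  # columns: student_id, event_type, timestamp, minutes
events["timestamp"] = pd.to_datetime(events["timestamp"])

signals = events.groupby("student_id").agg(
    total_logins=("event_type", lambda s: (s == "login").sum()),
    discussion_posts=("event_type", lambda s: (s == "discussion_post").sum()),
    minutes_on_content=("minutes", "sum"),
    last_active=("timestamp", "max"),
)

# Flag students whose activity has dropped off, rather than waiting for a failed assignment.
stale = pd.Timestamp.now() - signals["last_active"] > pd.Timedelta(days=7)
print(signals[stale])
```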
Sharpen your students’ interview skills — from timeshighereducation.com by Lewis Humphreys (though higher education-related, this is still solid information for those in K12). The employees of the future will need to showcase their skills in job interviews. Make sure they’re prepared for each setting, writes Lewis Humphreys.
In today’s ultra-competitive job market, strong interview skills are paramount for students taking their first steps into the professional world. University careers services play a crucial role in equipping our students with the tools and confidence needed to excel in a range of interview settings. From pre-recorded video interviews to live online sessions and traditional face-to-face meetings, students must be adaptable and well-prepared. Here, I’ll explore ways universities can teach interview skills to students and graduates, helping them to present themselves and their skills in the best light possible.
LinkedIn, the social platform used by professionals to connect with others in their field, hunt for jobs, and develop skills, is taking the wraps off its latest effort to build artificial intelligence tools for users. Hiring Assistant is a new product designed to take on a wide array of recruitment tasks, from turning scrappy notes and thoughts into longer job descriptions to sourcing candidates and engaging with them.
LinkedIn is describing Hiring Assistant as a milestone in its AI trajectory: It is, per the Microsoft-owned company, its first “AI agent” and one that happens to be targeting one of LinkedIn’s most lucrative categories of users — recruiters.
In a groundbreaking study, researchers from Penn Engineering showed how AI-powered robots can be manipulated to ignore safety protocols, allowing them to perform harmful actions despite normally rejecting dangerous task requests.
What did they find?
Researchers found previously unknown security vulnerabilities in AI-governed robots and are working to address these issues to ensure the safe use of large language models (LLMs) in robotics.
Their newly developed algorithm, RoboPAIR, reportedly achieved a 100% jailbreak rate by bypassing the safety protocols on three different AI robotic systems in a few days.
Using RoboPAIR, researchers were able to manipulate test robots into performing harmful actions, like bomb detonation and blocking emergency exits, simply by changing how they phrased their commands.
Why does it matter?
This research highlights the importance of spotting weaknesses in AI systems to improve their safety, allowing us to test and train them to prevent potential harm.
From DSC: Great! Just what we wanted to hear. But does it surprise anyone? Even so…we move forward at warp speed.
From DSC:
So, given the above item, does the next item make you a bit nervous as well? I saw someone on Twitter/X exclaim, “What could go wrong?” I can’t say I didn’t feel the same way.
We’re also introducing a groundbreaking new capability in public beta: computer use. Available today on the API, developers can direct Claude to use computers the way people do—by looking at a screen, moving a cursor, clicking buttons, and typing text. Claude 3.5 Sonnet is the first frontier AI model to offer computer use in public beta. At this stage, it is still experimental—at times cumbersome and error-prone. We’re releasing computer use early for feedback from developers, and expect the capability to improve rapidly over time.
Per The Rundown AI:
The Rundown: Anthropic just introduced a new capability called ‘computer use’, alongside upgraded versions of its AI models, which enables Claude to interact with computers by viewing screens, typing, moving cursors, and executing commands.
… Why it matters: While many hoped for Opus 3.5, Anthropic’s Sonnet and Haiku upgrades pack a serious punch. Plus, with the new computer use embedded right into its foundation models, Anthropic just sent a warning shot to tons of automation startups—even if the capabilities aren’t earth-shattering… yet.
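For developers wondering what “computer use” looks like at the API level, the sketch below is modeled on Anthropic’s published beta example from October 2024. Treat the model name, tool type strings, and beta flag as assumptions that may have changed since; check the current documentation before relying on them.

```python
# Sketch of a "computer use" request, modeled on Anthropic's October 2024 beta docs.
# The model name, tool type strings, and beta flag may have changed; verify against
# current documentation before relying on them.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.beta.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=1024,
    tools=[{
        "type": "computer_20241022",   # virtual screen/mouse/keyboard tool
        "name": "computer",
        "display_width_px": 1024,
        "display_height_px": 768,
    }],
    messages=[{"role": "user", "content": "Open the spreadsheet on the desktop and read cell A1."}],
    betas=["computer-use-2024-10-22"],
)

# The model replies with tool_use blocks (screenshots to take, clicks to make);
# the developer's own agent loop executes those actions and returns the results.
print(response.content)
```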
Also related/see:
What is Anthropic’s AI Computer Use? — from ai-supremacy.com by Michael Spencer. Task automation and AI at the intersection of coding and AI agents take on new, frenzied importance heading into 2025 for the commercialization of generative AI.
New Claude, Who Dis? — from theneurondaily.com Anthropic just dropped two new Claude models…oh, and Claude can now use your computer.
What makes Act-One special? It can capture the soul of an actor’s performance using nothing but a simple video recording. No fancy motion capture equipment, no complex face rigging, no army of animators required. Just point a camera at someone acting, and watch as their exact expressions, micro-movements, and emotional nuances get transferred to an AI-generated character.
Think about what this means for creators: you could shoot an entire movie with multiple characters using just one actor and a basic camera setup. The same performance can drive characters with completely different proportions and looks, while maintaining the authentic emotional delivery of the original performance. We’re witnessing the democratization of animation tools that used to require millions in budget and years of specialized training.
Also related/see:
Introducing, Act-One. A new way to generate expressive character performances inside Gen-3 Alpha using a single driving video and character image. No motion capture or rigging required.
Google has signed a “world first” deal to buy energy from a fleet of mini nuclear reactors to generate the power needed for the rise in use of artificial intelligence.
The US tech corporation has ordered six or seven small modular reactors (SMRs) from California’s Kairos Power, with the first due to be completed by 2030 and the remainder by 2035.
After the extreme peak and summer slump of 2023, ChatGPT has been setting new traffic highs since May
ChatGPT has been topping its web traffic records for months now, with September 2024 traffic up 112% year-over-year (YoY) to 3.1 billion visits, according to Similarweb estimates. That’s a change from last year, when traffic to the site went through a boom-and-bust cycle.
Google has made a historic agreement to buy energy from a group of small modular reactors (SMRs) from Kairos Power in California. This is the first nuclear power deal in the world made specifically for AI data centers.
Hey creators!
Made on YouTube 2024 is here and we’ve announced a lot of updates that aim to give everyone the opportunity to build engaging communities, drive sustainable businesses, and express creativity on our platform.
Below is a roundup with key info – feel free to upvote the announcements that you’re most excited about and subscribe to this post to get updates on these features! We’re looking forward to another year of innovating with our global community. It’s a future full of opportunities, and it’s all Made on YouTube!
Today, we’re announcing new agentic capabilities that will accelerate these gains and bring AI-first business process to every organization.
First, the ability to create autonomous agents with Copilot Studio will be in public preview next month.
Second, we’re introducing ten new autonomous agents in Dynamics 365 to build capacity for every sales, service, finance and supply chain team.
10 Daily AI Use Cases for Business Leaders — from flexos.work by Daan van Rossum. While AI is becoming more powerful by the day, business leaders still wonder why and where to apply it today. I take you through 10 critical use cases where AI should take over your work or partner with you.
Emerging Multi-Modal AI Video Creation Platforms
The rise of multi-modal AI platforms has revolutionized content creation, allowing users to research, write, and generate images in one app. Now, a new wave of platforms is extending these capabilities to video creation and editing.
Multi-modal video platforms combine various AI tools for tasks like writing, transcription, text-to-voice conversion, image-to-video generation, and lip-syncing. These platforms leverage open-source models like FLUX and LivePortrait, along with APIs from services such as ElevenLabs, Luma AI, and Gen-3.
AI’s Trillion-Dollar Opportunity — from bain.com by David Crawford, Jue Wang, and Roy Singh. The market for AI products and services could reach between $780 billion and $990 billion by 2027.
At a Glance
The big cloud providers are the largest concentration of R&D, talent, and innovation today, pushing the boundaries of large models and advanced infrastructure.
Innovation with smaller models (open-source and proprietary), edge infrastructure, and commercial software is reaching enterprises, sovereigns, and research institutions.
Commercial software vendors are rapidly expanding their feature sets to provide the best use cases and leverage their data assets.
Accelerated market growth. Nvidia’s CEO, Jensen Huang, summed up the potential in the company’s Q3 2024 earnings call: “Generative AI is the largest TAM [total addressable market] expansion of software and hardware that we’ve seen in several decades.”
And on a somewhat related note (i.e., emerging technologies), also see the following two postings:
Surgical Robots: Current Uses and Future Expectations — from medicalfuturist.com by Pranavsingh Dhunnoo. As the term implies, a surgical robot is an assistive tool for performing surgical procedures. Such manoeuvres, also called robotic surgeries or robot-assisted surgery, usually involve a human surgeon controlling mechanical arms from a control centre.
Key Takeaways
The potential of robots has long been a fascination for humans and has even led to a booming field of robot-assisted surgery.
Surgical robots assist surgeons in performing accurate, minimally invasive procedures that are beneficial for patients’ recovery.
The assistance of robots extends beyond incisions and includes laparoscopies, radiosurgeries and, in the future, a combination of artificial intelligence technologies to assist surgeons in their craft.
“Working with the team from Proto to bring to life, what several years ago would have seemed impossible, is now going to allow West Cancer Center & Research Institute to pioneer options for patients to get highly specialized care without having to travel to large metro areas,” said West Cancer’s CEO, Mitch Graves.
Obviously this workflow works just as well for meetings as it does for lectures. Stay present in the meeting with no screens and just write down the key points with pen and paper. Then let NotebookLM assemble the detailed summary based on your high-level notes. https://t.co/fZMG7LgsWG
In a matter of months, organizations have gone from AI helping answer questions, to AI making predictions, to generative AI agents. What makes AI agents unique is that they can take actions to achieve specific goals, whether that’s guiding a shopper to the perfect pair of shoes, helping an employee find the right health benefits, or supporting nursing staff with smoother patient hand-offs during shift changes.
In our work with customers, we keep hearing that their teams are increasingly focused on improving productivity, automating processes, and modernizing the customer experience. These aims are now being achieved through the AI agents they’re developing in six key areas: customer service; employee empowerment; code creation; data analysis; cybersecurity; and creative ideation and production.
…
Here’s a snapshot of how 185 of these industry leaders are putting AI to use today, creating real-world use cases that will transform tomorrow.
AI Video Tools You Can Use Today — from heatherbcooper.substack.com by Heather Cooper. The latest AI video models that deliver results.
AI video models are improving so quickly, I can barely keep up! I wrote about unreleased Adobe Firefly Video in the last issue, and we are no closer to public access to Sora.
No worries – we do have plenty of generative AI video tools we can use right now.
Kling AI launched its updated v1.5, and the quality of its image-to-video and text-to-video generation is impressive.
Hailuo MiniMax text to video remains free to use for now, and it produces natural and photorealistic results (with watermarks).
Runway added the option to upload portrait aspect ratio images to generate vertical videos in Gen-3 Alpha & Turbo modes.
…plus several more
Advanced Voice is rolling out to all Plus and Team users in the ChatGPT app over the course of the week.
While you’ve been patiently waiting, we’ve added Custom Instructions, Memory, five new voices, and improved accents.
AI can help educators focus more on human interaction and critical thinking by automating tasks that consume time but don’t require human empathy or creativity.
Encouraging students to use AI as a tool for learning and creativity can significantly boost their engagement and self-confidence, as seen in examples from student experiences shared in the discussion.
The speakers discuss various aspects of AI, including its potential to augment human intelligence and the need to focus on uniquely human competencies in the face of technological advancements. They also emphasize the significance of student agency, with examples of student-led initiatives and feedback sessions that reveal how young learners are already engaging with AI in innovative ways. The episode underscores the necessity for educators and administrators to stay informed and actively participate in the ongoing dialogue about AI to ensure its effective and equitable implementation in schools.
AI can be a powerful tool to break down language, interest, and accessibility barriers in the classroom, making learning more inclusive and engaging.
Incorporating AI tools in educational settings can help build essential skills that AI can’t replace, such as creativity and problem-solving, preparing students for future job markets.
When A.I.’s Output Is a Threat to A.I. Itself — from nytimes.com by Aatish Bhatia. As A.I.-generated data becomes harder to detect, it’s increasingly likely to be ingested by future A.I., leading to worse results.
All this A.I.-generated information can make it harder for us to know what’s real. And it also poses a problem for A.I. companies. As they trawl the web for new data to train their next models on — an increasingly challenging task — they’re likely to ingest some of their own A.I.-generated content, creating an unintentional feedback loop in which what was once the output from one A.I. becomes the input for another.
In the long run, this cycle may pose a threat to A.I. itself. Research has shown that when generative A.I. is trained on a lot of its own output, it can get a lot worse.
This weekend, the @xAI team brought our Colossus 100k H100 training cluster online. From start to finish, it was done in 122 days.
Colossus is the most powerful AI training system in the world. Moreover, it will double in size to 200k (50k H200s) in a few months.
The Rundown: Elon Musk’s xAI just launched “Colossus“, the world’s most powerful AI cluster powered by a whopping 100,000 Nvidia H100 GPUs, which was built in just 122 days and is planned to double in size soon.
… Why it matters: xAI’s Grok 2 recently caught up to OpenAI’s GPT-4 in record time, and was trained on only around 15,000 GPUs. With now more than six times that amount in production, the xAI team and future versions of Grok are going to put a significant amount of pressure on OpenAI, Google, and others to deliver.
Google Meet’s automatic AI note-taking is here — from theverge.com by Joanna Nelius Starting [on 8/28/24], some Google Workspace customers can have Google Meet be their personal note-taker.
Google Meet’s newest AI-powered feature, “take notes for me,” has started rolling out today to Google Workspace customers with the Gemini Enterprise, Gemini Education Premium, or AI Meetings & Messaging add-ons. It’s similar to Meet’s transcription tool, only instead of automatically transcribing what everyone says, it summarizes what everyone talked about. Google first announced this feature at its 2023 Cloud Next conference.
The World’s Call Center Capital Is Gripped by AI Fever — and Fear — from bloomberg.com by Saritha Rai [behind a paywall] The experiences of staff in the Philippines’ outsourcing industry are a preview of the challenges and choices coming soon to white-collar workers around the globe.
[On 8/27/24], we’re making Artifacts available for all Claude.ai users across our Free, Pro, and Team plans. And now, you can create and view Artifacts on our iOS and Android apps.
Artifacts turn conversations with Claude into a more creative and collaborative experience. With Artifacts, you have a dedicated window to instantly see, iterate, and build on the work you create with Claude. Since launching as a feature preview in June, users have created tens of millions of Artifacts.
What is the AI Risk Repository? The AI Risk Repository has three parts:
The AI Risk Database captures 700+ risks extracted from 43 existing frameworks, with quotes and page numbers.
The Causal Taxonomy of AI Risks classifies how, when, and why these risks occur.
The Domain Taxonomy of AI Risks classifies these risks into seven domains (e.g., “Misinformation”) and 23 subdomains (e.g., “False or misleading information”).
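One way to picture the repository is as a flat table of risk records, each tagged with its causal classification and its domain/subdomain, so the two taxonomies can be queried together. The sketch below uses invented field names and example values purely to illustrate that structure; it is not the repository’s actual schema.

```python
# Sketch of the repository's structure as tagged risk records.
# Field names and example values are invented for illustration.
from dataclasses import dataclass

@dataclass
class RiskRecord:
    description: str
    source_framework: str      # which of the 43 frameworks it was extracted from
    causal_entity: str         # who/what causes the risk (e.g., "Human" or "AI")
    causal_intent: str         # e.g., "Intentional" or "Unintentional"
    causal_timing: str         # e.g., "Pre-deployment" or "Post-deployment"
    domain: str                # one of the 7 domains
    subdomain: str             # one of the 23 subdomains

risks = [
    RiskRecord("Model generates convincing but false claims", "Framework A",
               "AI", "Unintentional", "Post-deployment",
               "Misinformation", "False or misleading information"),
    RiskRecord("Operator uses model to mass-produce propaganda", "Framework B",
               "Human", "Intentional", "Post-deployment",
               "Misinformation", "False or misleading information"),
]

# Cross-cutting query: unintentional misinformation risks across all source frameworks.
hits = [r for r in risks
        if r.domain == "Misinformation" and r.causal_intent == "Unintentional"]
print(len(hits), "matching risks")
```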
SACRAMENTO, Calif. — California lawmakers approved a host of proposals this week aiming to regulate the artificial intelligence industry, combat deepfakes and protect workers from exploitation by the rapidly evolving technology.
Per Oncely:
The Details:
Combatting Deepfakes: New laws to restrict election-related deepfakes and deepfake pornography, especially of minors, requiring social media to remove such content promptly.
Setting Safety Guardrails: California is poised to set comprehensive safety standards for AI, including transparency in AI model training and pre-emptive safety protocols.
Protecting Workers: Legislation to prevent the replacement of workers, like voice actors and call center employees, with AI technologies.
Over the coming days, start creating and chatting with Gems: customizable versions of Gemini that act as topic experts.
We’re also launching premade Gems for different scenarios – including Learning coach to break down complex topics and Coding partner to level up your skills… pic.twitter.com/2Dk8NxtTCE
We have new features rolling out, [that started on 8/28/24], that we previewed at Google I/O. Gems, a new feature that lets you customize Gemini to create your own personal AI experts on any topic you want, are now available for Gemini Advanced, Business and Enterprise users. And our new image generation model, Imagen 3, will be rolling out across Gemini, Gemini Advanced, Business and Enterprise in the coming days.
Major AI players caught heat in August over big bills and weak returns on AI investments, but it would be premature to think AI has failed to deliver. The real question is what’s next, and if industry buzz and pop-sci pontification hold any clues, the answer isn’t “more chatbots”, it’s agentic AI.
Agentic AI transforms the user experience from application-oriented information synthesis to goal-oriented problem solving. It’s what people have always thought AI would do—and while it’s not here yet, its horizon is getting closer every day.
In this issue of AI Pulse, we take a deep dive into agentic AI, what’s required to make it a reality, and how to prevent ‘self-thinking’ AI agents from potentially going rogue.
…
Citing AWS guidance, ZDNET counts six different potential types of AI agents:
Simple reflex agents for tasks like resetting passwords
Model-based reflex agents for pro vs. con decision making
Goal-/rule-based agents that compare options and select the most efficient pathways
Utility-based agents that compare for value
Learning agents
Hierarchical agents that manage and assign subtasks to other agents
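To make the distinctions above concrete, here is a toy sketch (not AWS’s or ZDNET’s code) contrasting the first and third categories: a simple reflex agent reacts to the current condition with a fixed rule, while a goal-/rule-based agent compares options and selects the most efficient pathway that still satisfies the goal. All names and thresholds are invented for illustration.

```python
# Toy sketch contrasting two of the agent types listed above (illustrative only).
from typing import Dict, Optional

def simple_reflex_agent(request: str) -> str:
    """Simple reflex agent: a fixed condition-action rule, no model of the world."""
    if "reset password" in request.lower():
        return "send_password_reset_link"
    return "route_to_human"

def goal_based_agent(options: Dict[str, float], cost_cap: float) -> Optional[str]:
    """Goal-/rule-based agent: compares options and picks the most efficient
    pathway (here, the cheapest option) that still satisfies the goal (cost cap)."""
    viable = {name: cost for name, cost in options.items() if cost <= cost_cap}
    return min(viable, key=viable.get) if viable else None

print(simple_reflex_agent("I need to reset password"))                      # send_password_reset_link
print(goal_based_agent({"path_a": 3.0, "path_b": 1.5, "path_c": 9.0}, 5.0))  # path_b
```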
Thanks to rapid advances in technology, the legal landscape is changing fast. In 2024, legal tech integration is the lifeblood of any law firm or legal department that wishes to stay competitive.
Innovations ranging from AI-driven research tools to blockchain-enabled contracts are not just highlights of legal work today. Understanding and embracing these trends will be vital to surviving and thriving in law as the revolution gains momentum and the world of legal practice continues to shift.
Below are the eight expected trends in legal tech defining the future legal practice.
While we’re not delving deep here into how generative artificial intelligence (GenAI) and large language models (LLMs) work, we will talk generally about different categories of tech and emerging GenAI functionalities that are specific for legal.
Supio, a Seattle startup founded in 2021 by longtime friends and former Microsoft engineers, raised a $25 million Series A investment to supercharge its software platform designed to help lawyers quickly sort, search, and organize case-related data.
…
Supio focuses on cases related to personal injury and mass tort plaintiff law (when many plaintiffs file a claim). It specializes in organizing unstructured data and letting lawyers use a chatbot to pull relevant information.
“Most lawyers are data-rich and time-starved, but Supio automates time-sapping manual processes and empowers them to identify critical information to prove and expedite their cases,” Supio CEO and co-founder Jerry Zhou said in a statement.
NASHVILLE — As the world approaches the two-year mark since the original introduction of OpenAI’s ChatGPT, law firms already have made inroads into establishing generative artificial intelligence (GenAI) as a part of their firms. Whether for document and correspondence drafting, summarization of meetings and contracts, legal research, or back-office capabilities, firms have been playing around with a number of use cases to see where the technology may fit into the future.
Thomson Reuters announced (on August 21) it has made the somewhat unusual acquisition of UK pre-revenue startup Safe Sign Technologies (SST), which is developing legal-specific large language models (LLMs) and as of just eight months ago was operating in stealth mode.
…
There isn’t an awful lot of public information available about the company but speaking to Legal IT Insider about the acquisition, Hron explained that SST is focused in part on deep learning research as it pertains to training large language models and specifically legal large language models. The company as yet has no customers and has been focusing exclusively on developing the technology and the models.
Legal work is incredibly labor- and time-intensive, requiring piecing together cases from vast amounts of evidence. That’s driving some firms to pilot AI to streamline certain steps; according to a 2023 survey by the American Bar Association, 35% of law firms now use AI tools in their practice.
OpenAI-backed Harvey is among the big winners so far in the burgeoning AI legal tech space, alongside startups such as Leya and Klarity. But there’s room for one more, say Jerry Zhou and Kyle Lam, the co-founders of an AI platform for personal injury law called Supio, which emerged from stealth Tuesday with a $25 million investment led by Sapphire Ventures.
Supio uses generative AI to automate bulk data collection and aggregation for legal teams. In addition to summarizing info, the platform can organize and identify files — and snippets within files — that might be useful in outlining, drafting and presenting a case, Zhou said.
An internet search for free learning resources will likely return a long list that includes some useful sites amid a sea of not-really-free and not-very-useful sites.
To help teachers more easily find the best free and freemium sites they can use in their classrooms and curricula, I’ve curated a list that describes the top free/freemium sites for learning.
In some cases, Tech & Learning has reviewed the site in detail, and those links are included so readers can find out more about how to make the best use of the online materials. In all cases, the websites below provide valuable educational tools, lessons, and ideas, and are worth exploring further.
How to Kill Student Curiosity in 5 Steps (and What to Do Instead) — from edweek.org by Olivia Odileke. The unintentional missteps teachers and administrators are making.
I’ve observed five major ways we’re unintentionally stifling curiosity and issue a call to action for educators, administrators, and policymakers to join the curiosity revolution:
From DSC: Last Thursday, I presented at the Educational Technology Organization of Michigan’s Spring 2024 Retreat. I wanted to pass along my slides to you all, in case they are helpful to you.