What Students Are Saying About Teachers Using A.I. to Grade — from nytimes.com; via Claire Zau
Teenagers and educators weigh in on a recent question from The Ethicist.

Is it unethical for teachers to use artificial intelligence to grade papers if they have forbidden their students from using it for their assignments?

That was the question a teacher asked Kwame Anthony Appiah in a recent edition of The Ethicist. We posed it to students to get their take on the debate, and asked them their thoughts on teachers using A.I. in general.

While our Student Opinion questions are usually reserved for teenagers, we also heard from a few educators about how they are — or aren’t — using A.I. in the classroom. We’ve included some of their answers, as well.


OpenAI wants to pair online courses with chatbots — from techcrunch.com by Kyle Wiggers; via James DeVaney on LinkedIn

If OpenAI has its way, the next online course you take might have a chatbot component.

Speaking at a fireside on Monday hosted by Coeus Collective, Siya Raj Purohit, a member of OpenAI’s go-to-market team for education, said that OpenAI might explore ways to let e-learning instructors create custom “GPTs” that tie into online curriculums.

“What I’m hoping is going to happen is that professors are going to create custom GPTs for the public and let people engage with content in a lifelong manner,” Purohit said. “It’s not part of the current work that we’re doing, but it’s definitely on the roadmap.”


Learning About Google Learn About: What Educators Need To Know — from techlearning.com by Ray Bendici
Google’s experimental Learn About platform is designed to create an AI-guided learning experience

Google Learn About is a new experimental AI-driven platform that provides digestible and in-depth knowledge about various topics, but showcases it all in an educational context. Described by Google as a “conversational learning companion,” it is essentially a Wikipedia-style chatbot/search engine, and then some.

In addition to having a variety of already-created topics and leading questions (in areas such as history, arts, culture, biology, and physics) the tool allows you to enter prompts using either text or an image. It then provides a general overview/answer, and then suggests additional questions, topics, and more to explore in regard to the initial subject.

The idea for student use is that the AI can help guide a deeper learning process rather than just provide static answers.


What OpenAI’s PD for Teachers Does—and Doesn’t—Do — from edweek.org by Olina Banerji
What’s the first thing that teachers dipping their toes into generative artificial intelligence should do?

They should start with the basics, according to OpenAI, the creator of ChatGPT and one of the world’s most prominent artificial intelligence research companies. Last month, the company launched an hour-long, self-paced online course for K-12 teachers about the definition, use, and harms of generative AI in the classroom. It was launched in collaboration with Common Sense Media, a national nonprofit that rates and reviews a wide range of digital content for its age appropriateness.

…the above article links to:

ChatGPT Foundations for K–12 Educators — from commonsense.org

This course introduces you to the basics of artificial intelligence, generative AI, ChatGPT, and how to use ChatGPT safely and effectively. From decoding the jargon to responsible use, this course will help you level up your understanding of AI and ChatGPT so that you can use tools like this safely and with a clear purpose.

Learning outcomes:

  • Understand what ChatGPT is and how it works.
  • Demonstrate ways to use ChatGPT to support your teaching practices.
  • Implement best practices for applying responsible AI principles in a school setting.

Takeaways From Google’s Learning in the AI Era Event — from edtechinsiders.substack.com by Sarah Morin, Alex Sarlin, and Ben Kornell
Highlights from Our Day at Google + Behind-the-Scenes Interviews Coming Soon!

  1. NotebookLM: The Start of an AI Operating System
  2. Google is Serious About AI and Learning
  3. Google’s LearnLM Now Available in AI Studio
  4. Collaboration is King
  5. If You Give a Teacher a Ferrari

Rapid Responses to AI — from the-job.beehiiv.com by Paul Fain
Top experts call for better data and more short-term training as tech transforms jobs.

AI could displace middle-skill workers and widen the wealth gap, says landmark study, which calls for better data and more investment in continuing education to help workers make career pivots.

Ensuring That AI Helps Workers
Artificial intelligence has emerged as a general purpose technology with sweeping implications for the workforce and education. While it’s impossible to precisely predict the scope and timing of looming changes to the labor market, the U.S. should build its capacity to rapidly detect and respond to AI developments.
That’s the big-ticket framing of a broad new report from the National Academies of Sciences, Engineering, and Medicine. Congress requested the study, tapping an all-star committee of experts to assess the current and future impact of AI on the workforce.

“In contemplating what the future holds, one must approach predictions with humility,” the study says…

“AI could accelerate occupational polarization,” the committee said, “by automating more nonroutine tasks and increasing the demand for elite expertise while displacing middle-skill workers.”

The Kicker: “The education and workforce ecosystem has a responsibility to be intentional with how we value humans in an AI-powered world and design jobs and systems around that,” says Hsieh.


Why We Undervalue Ideas and Overvalue Writing — from aiczar.blogspot.com by Alexander “Sasha” Sidorkin

A student submits a paper that fails to impress stylistically yet approaches a worn topic from an angle no one has tried before. The grade lands at B minus, and the student learns to be less original next time. This pattern reveals a deep bias in higher education: ideas lose to writing every time.

This bias carries serious equity implications. Students from disadvantaged backgrounds, including first-generation college students, English language learners, and those from under-resourced schools, often arrive with rich intellectual perspectives but struggle with academic writing conventions. Their ideas – shaped by unique life experiences and cultural viewpoints – get buried under red ink marking grammatical errors and awkward transitions. We systematically undervalue their intellectual contributions simply because they do not arrive in standard academic packaging.


Google Scholar’s New AI Outline Tool Explained By Its Founder — from techlearning.com by Erik Ofgang
Google Scholar PDF reader uses Gemini AI to read research papers. The AI model creates direct links to the paper’s citations and a digital outline that summarizes the different sections of the paper.

Google Scholar has entered the AI revolution. Google Scholar PDF reader now utilizes generative AI powered by Google’s Gemini AI tool to create interactive outlines of research papers and provide direct links to sources within the paper. This is designed to make reading the relevant parts of the research paper more efficient, says Anurag Acharya, who co-founded Google Scholar on November 18, 2004, twenty years ago last month.


The Four Most Powerful AI Use Cases in Instructional Design Right Now — from drphilippahardman.substack.com by Dr. Philippa Hardman
Insights from ~300 instructional designers who have taken my AI & Learning Design bootcamp this year

  1. AI-Powered Analysis: Creating Detailed Learner Personas…
  2. AI-Powered Design: Optimising Instructional Strategies…
  3. AI-Powered Development & Implementation: Quality Assurance…
  4. AI-Powered Evaluation: Predictive Impact Assessment…

How Are New AI Tools Changing ‘Learning Analytics’? — from edsurge.com by Jeffrey R. Young
For a field that has been working to learn from the data trails students leave in online systems, generative AI brings new promises — and new challenges.

In other words, with just a few simple instructions to ChatGPT, the chatbot can classify vast amounts of student work and turn it into numbers that educators can quickly analyze.

Findings from learning analytics research are also being used to help train new generative AI-powered tutoring systems.

Another big application is in assessment, says Pardos, the Berkeley professor. Specifically, new AI tools can be used to improve how educators measure and grade a student’s progress through course materials. The hope is that new AI tools will allow for replacing many multiple-choice exercises in online textbooks with fill-in-the-blank or essay questions.
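As a rough illustration of the workflow Young describes — sending student work to a chat model with a short instruction and turning the replies into numbers — here is a minimal sketch. Everything in it (the rubric wording, the helper names, the sample replies) is a hypothetical example, not code from the article or from any named tool; the actual model call is left as a comment so the prompt-building and tallying logic stand on their own.

```python
# Hypothetical sketch: classify student responses into rubric categories
# with an LLM, then tally the labels for quick analysis.
from collections import Counter

RUBRIC = """Classify the student response into exactly one category:
1 = misconception, 2 = partial understanding, 3 = full understanding.
Reply with the number only."""

def build_prompt(response_text: str) -> str:
    """Combine the rubric instruction with one student response."""
    return f"{RUBRIC}\n\nStudent response: {response_text}"

def parse_label(model_reply: str) -> int:
    """Turn a model reply like '2' or '2 - partial' into an int label."""
    digits = [ch for ch in model_reply if ch.isdigit()]
    return int(digits[0]) if digits else 0  # 0 = unparseable reply

# In practice each prompt would go to a chat model, e.g. with the OpenAI SDK:
#   reply = client.chat.completions.create(
#       model="gpt-4o-mini",
#       messages=[{"role": "user", "content": build_prompt(text)}],
#   ).choices[0].message.content
# Canned replies stand in here so the tallying step is visible:
canned_replies = ["3", "2 - partial understanding", "3", "1", "2"]
labels = [parse_label(r) for r in canned_replies]
print(Counter(labels))  # label distribution educators can scan at a glance
```

The point of the tally step is the one the article makes: once free-form student work is reduced to category labels, ordinary descriptive statistics apply.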

 

The Many Special Populations Microschools Serve — from microschoolingcenter.org by Don Soifer

Kids representing a broad range of special populations have a strong presence in today’s microschooling movement. Children with neurodiversities, other special needs, and those coming to microschools at two or more grades below “grade level mastery” as defined by their state all are served by more than 50 percent of microschools surveyed nationally, according to the Center’s 2024 American Microschools Sector Analysis report.

Children who have experienced emotional trauma or have experienced housing or food insecurity are also being served widely in microschools, according to leaders surveyed nationally.

This won’t come as a surprise to most in the microschooling movement. But for those who are less familiar, it is worth understanding the many ways that microschooling is about thriving for families and children who have struggled in their prior schooling settings.

 

Focus on School-to-Home Resources for the Holidays

Focus on School-to-Home Resources for the Holidays — from classtechtips.com by Monica Burns

As the holiday season approaches, families might reach out to educators looking for resources to support learning over school break. By leveraging high-quality, vetted resources, you can ensure that the school-to-home connection stays strong, even during winter break. Today on the blog, I’m excited to share some resources for the holidays from the team at Ask, Listen, Learn.


Also see:

Tech-Savvy Approaches for a Differentiated Classroom with Dr. Clare Kilbane and Dr. Natalie Milman – Easy EdTech Podcast 296

In this episode, educational leaders and fellow ASCD authors Dr. Clare Kilbane and Dr. Natalie Milman share expert strategies for using EdTech to personalize learning. Explore insights from their book, Using Technology in a Differentiated Classroom, and discover practical tips to thoughtfully integrate digital tools and support diverse learners. If you’re ready to elevate all students’ learning journeys, this episode is a must-listen!


Also see:

Lesson planning resource — short, engaging videos for students
This post is sponsored by ClickView. All opinions are my own.

Where do you go to find engaging, high-quality content for your lesson plans? Searching for short videos for students might feel like a time consuming task. As a classroom teacher there were plenty of times when I knew watching a video clip would help students better understand a concept, but I couldn’t always find the right videos to share with them. ClickView is a platform with video resources curated just for K-12 educators.

Today on the blog we’ll take a look at ClickView, a video platform for K-12 schools designed to make lesson planning easier. Whether you’re introducing a new topic or diving deeper into a complex unit, ClickView’s range of videos and innovative features can transform the way you teach.

 

VLOG: Learning in Medical School — from learningscientists.org by The Learning Scientists

NOTE:
  • This vlog is for anyone in medical school, interested in medical school, or just curious about what learning is like in medical school!

In this vlog Althea and Cindy talk about their work with medical student learners. They discuss common learning challenges in medical school, efficient learning strategies, learning in the context of attentional disorders and anxiety, and what it means to prepare future healers.

 

20 Years of Law School Insights: Diversity, Debt, and Student Satisfaction Trends — from jdjournal.com by Maria Lenin Laus (may ask for email address and first name to access this)

Over the past two decades, law school students have consistently rated their educational experiences highly. According to the Law School Survey of Student Engagement (LSSSE), nearly 80% of law students have described their experience as “good” or “excellent” since the survey’s inception in 2004. However, significant disparities in satisfaction persist, particularly among students of color.

The LSSSE, conducted by Indiana University’s Center for Postsecondary Research, recently released its 20th-anniversary report. The report reflects on changes in student diversity, debt, career aspirations, and overall satisfaction and paints a nuanced picture of the evolving law school experience and the remaining challenges.

Also see:


Law student satisfaction rates high for last 20 years, lower for students of color, study shows — from reuters.com by Karen Sloan

Nov 26 (Reuters) – Aspiring lawyers have consistently rated their experience at law schools well over the past two decades, with about 80% rating their experience as either “good” or “excellent” on the annual Law School Survey of Student Engagement, although satisfaction rates among students of color were lower than among whites over the past 20 years.

The survey, part of Indiana University’s Center for Postsecondary Research, marked its 20th edition this month with a look back at how student diversity, debt loads, career plans and satisfaction levels have changed – or not – during that time.


 

US College Closures Are Expected to Soar, Fed Research Says — from bloomberg.com

  • Fed research created predictive model of college stress
  • Worst-case scenario forecasts 80 additional closures

The number of colleges that close each year is poised to significantly increase as schools contend with a slowdown in prospective students.

That’s the finding of a new working paper published by the Federal Reserve Bank of Philadelphia, where researchers created predictive models of schools’ financial distress using metrics like enrollment and staffing patterns, sources of revenue and liquidity data. They overlaid those models with simulations to estimate the likely increase of future closures.

Excerpt from the working paper:

We document a high degree of missing data among colleges that eventually close and show that this is a key impediment to identifying at risk institutions. We then show that modern machine learning techniques, combined with richer data, are far more effective at predicting college closures than linear probability models, and considerably more effective than existing accountability metrics. Our preferred model, which combines an off-the-shelf machine learning algorithm with the richest set of explanatory variables, can significantly improve predictive accuracy even for institutions with complete data, but is particularly helpful for predicting instances of financial distress for institutions with spotty data.
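The paper’s core comparison — an off-the-shelf machine learning model beating a linear probability model at predicting closures — can be illustrated in miniature. The sketch below uses synthetic data and invented stress indicators, not the paper’s dataset, features, or exact models; it only shows why a nonlinear learner can pick up an interaction (falling enrollment *and* low liquidity) that a linear probability model misses.

```python
# Toy illustration (synthetic data, invented features): linear probability
# model vs. gradient boosting for predicting institutional "closure".
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
# Invented stress indicators: enrollment trend, liquidity ratio, revenue mix
X = rng.normal(size=(n, 3))
# Nonlinear ground truth: risk spikes only when enrollment falls AND liquidity is low
at_risk = (X[:, 0] < -0.5) & (X[:, 1] < 0.0)
p = 1.0 / (1.0 + np.exp(-(-2.0 + 4.0 * at_risk)))
y = (rng.random(n) < p).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Linear probability model: OLS on the 0/1 outcome, thresholded at 0.5
lpm = LinearRegression().fit(X_tr, y_tr)
lpm_acc = ((lpm.predict(X_te) > 0.5).astype(int) == y_te).mean()

# Off-the-shelf ML model, in the spirit of the paper's preferred approach
gbm = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
gbm_acc = (gbm.predict(X_te) == y_te).mean()

print(f"linear probability model accuracy: {lpm_acc:.3f}")
print(f"gradient boosting accuracy:        {gbm_acc:.3f}")
```

Under these assumptions the boosted model can exploit the interaction directly, which is the intuition behind the paper’s finding that richer models help most for institutions whose risk is not a simple linear function of any one metric.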


From DSC:
Questions that come to my mind here include:

  • Shouldn’t the public — especially relevant parents and students — be made more aware of these types of papers and reports?
  • How would any of us like finishing up 1-3 years of school and then being told that our colleges or universities were closing, effective immediately? (This has happened many times already.) And with the demographic cliff starting to hit higher education, this will happen even more now.
    Adding insult to injury…when we transfer to different institutions, we’re told that many of our prior credits don’t transfer — thus adding a significant amount to the overall cost of obtaining our degrees.
  • Would we not be absolutely furious to discover such communications from our prior — and new — colleges and universities?
  • Will all of these types of closures move more people to this vision here?

Relevant excerpts from Ray Schroeder’s recent articles out at insidehighered.com:

Winds of Change in Higher Ed to Become a Hurricane in 2025

A number of factors are converging to create a huge storm. Generative AI advances, massive federal policy shifts, broad societal and economic changes, and the demographic cliff combine to create uncertainty today and change tomorrow.

Higher Education in 2025: AGI Agents to Displace People

The anticipated enrollment cliff, reductions in federal and state funding, increased inflation, and dwindling public support for tuition increases will combine to put even greater pressure on university budgets.


On the positive side of things, the completion rates have been getting better:

National college completion rate ticks up to 61.1% — from highereddive.com by Natalie Schwartz
Those who started at two-year public colleges helped drive the overall increase in students completing a credential.

Dive Brief:

  • Completion rates ticked up to 61.1% for students who entered college in fall 2018, a 0.5 percentage-point increase compared to the previous cohort, according to data released Wednesday by the National Student Clearinghouse Research Center.
  • The increase marks the highest six-year completion rate since 2007 when the clearinghouse began tracking the data. The growth was driven by fewer students stopping out of college, as well as completion gains among students who started at public two-year colleges.
  • “Higher completion rates are welcome news for colleges and universities still struggling to regain enrollment levels from before the pandemic,” Doug Shapiro, the research center’s executive director, said in a statement dated Wednesday.

Addendum:

Attention Please: Professors Struggle With Student Disengagement — from edsurge.com

The stakes are huge, because the concern is that maybe the social contract between students and professors is kind of breaking down. Do students believe that all this college lecturing is worth hearing? Or, will this moment force a change in the way college teaching is done?

 

From DSC:
I opened up a BRAND NEW box of cereal from Post the other day. As I looked down into the package, I realized that it was roughly half full. (This has happened many times before, but it struck me so much this time that I had to take pictures of it and post this item.)
Looks can be deceiving for sure. It looks like I should have been getting a full box of cereal…but no…only about half of the package was full. It’s another example of the shrinkflation of things — which can also be described as people deceptively ripping other people off. 

“As long as I’m earning $$, I don’t care how it impacts others.” <– That’s not me talking, but it’s increasingly the perspective that many Americans have these days. We don’t bother with ethics and morals…how old-fashioned can you get, right? We just want to make as much money as possible and to hell with how our actions/products are impacting others.

Another example from the food industry is one of the companies that I worked for in the 1990s — Kraft Foods. Kraft has not served people’s health well at all. Even when they tried to take noble steps to provide healthier foods, other food executives/companies in the industry wouldn’t hop on board. They just wanted to please Wall Street, not Main Street. So companies like Kraft have contributed to the current situations that we face which involve obesity, diabetes, heart attacks, and other ailments. (Not to mention increased health care costs.)

The gambling industry doesn’t give a rip about people either. Look out for the consequences.

And the cannabis industry joins the gambling industry...and they’re often right on the doorsteps of universities and colleges.

Bottom line reflection:
There are REAL ramifications when we don’t take Christ’s words/commands to love one another seriously (or even to care about someone at all). We’re experiencing such ramifications EVERY DAY now.

 

Closing the digital use divide with active and engaging learning — from eschoolnews.com by Laura Ascione
Students offered insight into how to use active learning, with digital tools, to boost their engagement

When it comes to classroom edtech use, digital tools have a drastically different impact when they are used actively instead of passively — a critical difference examined in the 2023-2024 Speak Up Research by Project Tomorrow.

Students also outlined their ideal active learning technologies:

  • Collaboration tools to support projects
  • Student-teacher communication tools
  • Online databases for self-directed research
  • Multi-media tools for creating new content
  • Online and digital games
  • AI tools to support personalized learning
  • Coding and computer programming resources
  • Online animations, simulations, and virtual labs
  • Virtual reality equipment and content
 

(Excerpt from the 12/4/24 edition)

Robot “Jailbreaks”
In the year or so since large language models hit the big time, researchers have demonstrated numerous ways of tricking them into producing problematic outputs including hateful jokes, malicious code, phishing emails, and the personal information of users. It turns out that misbehavior can take place in the physical world, too: LLM-powered robots can easily be hacked so that they behave in potentially dangerous ways.

Researchers from the University of Pennsylvania were able to persuade a simulated self-driving car to ignore stop signs and even drive off a bridge, get a wheeled robot to find the best place to detonate a bomb, and force a four-legged robot to spy on people and enter restricted areas.

“We view our attack not just as an attack on robots,” says George Pappas, head of a research lab at the University of Pennsylvania who helped unleash the rebellious robots. “Any time you connect LLMs and foundation models to the physical world, you actually can convert harmful text into harmful actions.”

The robot “jailbreaks” highlight a broader risk that is likely to grow as AI models are increasingly used as a way for humans to interact with physical systems, or to enable AI agents that act autonomously on computers, say the researchers involved.


Virtual lab powered by ‘AI scientists’ super-charges biomedical research — from nature.com by Helena Kudiabor
Could human-AI collaborations be the future of interdisciplinary studies?

In an effort to automate scientific discovery using artificial intelligence (AI), researchers have created a virtual laboratory that combines several ‘AI scientists’ — large language models with defined scientific roles — that can collaborate to achieve goals set by human researchers.

The system, described in a preprint posted on bioRxiv last month, was able to design antibody fragments called nanobodies that can bind to the virus that causes COVID-19, proposing nearly 100 of these structures in a fraction of the time it would take an all-human research group.


Can AI agents accelerate AI implementation for CIOs? — from intelligentcio.com by Arun Shankar

By embracing an agent-first approach, every CIO can redefine their business operations. AI agents are now the number one choice for CIOs as they come pre-built and can generate responses that are consistent with a company’s brand using trusted business data, explains Thierry Nicault at Salesforce Middle.


AI Turns Photos Into 3D Real World — from theaivalley.com by Barsee

Here’s what you need to know:

  • The system generates full 3D environments that expand beyond what’s visible in the original image, allowing users to explore new perspectives.
  • Users can freely navigate and view the generated space with standard keyboard and mouse controls, similar to browsing a website.
  • It includes real-time camera effects like depth-of-field and dolly zoom, as well as interactive lighting and animation sliders to tweak scenes.
  • The system works with both photos and AI-generated images, enabling creators to integrate it with text-to-image tools or even famous works of art.

Why it matters:
This technology opens up exciting possibilities for industries like gaming, film, and virtual experiences. Soon, creating fully immersive worlds could be as simple as generating a static image.

Also related, see:

From World Labs

Today we’re sharing our first step towards spatial intelligence: an AI system that generates 3D worlds from a single image. This lets you step into any image and explore it in 3D.

Most GenAI tools make 2D content like images or videos. Generating in 3D instead improves control and consistency. This will change how we make movies, games, simulators, and other digital manifestations of our physical world.

In this post you’ll explore our generated worlds, rendered live in your browser. You’ll also experience different camera effects, 3D effects, and dive into classic paintings. Finally, you’ll see how creators are already building with our models.


Addendum on 12/5/24:

 

AI Tutors: Hype or Hope for Education? — from educationnext.org by John Bailey and John Warner
In a new book, Sal Khan touts the potential of artificial intelligence to address lagging student achievement. Our authors weigh in.

In Salman Khan’s new book, Brave New Words: How AI Will Revolutionize Education (and Why That’s a Good Thing) (Viking, 2024), the Khan Academy founder predicts that AI will transform education by providing every student with a virtual personalized tutor at an affordable cost. Is Khan right? Is radically improved achievement for all students within reach at last? If so, what sorts of changes should we expect to see, and when? If not, what will hold back the AI revolution that Khan foresees? John Bailey, a visiting fellow at the American Enterprise Institute, endorses Khan’s vision and explains the profound impact that AI technology is already making in education. John Warner, a columnist for the Chicago Tribune and former editor for McSweeney’s Internet Tendency, makes the case that all the hype about AI tutoring is, as Macbeth quips, full of sound and fury, signifying nothing.

 

How to Secure Your 2025 Legal Tech — from americanbar.org by Rachel Bailey

Summary

  • With firms increasingly open to AI tools, now is an exciting time to do some blue-sky thinking about your firm’s technology as a whole.
  • This is a chance for teams to envision the future of their firm’s technology landscape and make bold choices that align with long-term goals.
  • Learn six tips that will improve your odds of approval for your legal tech budget.

Also relevant, see:


Why Technology-Driven Law Firms Are Poised For Long-Term Success — from forbes.com by Daniel Farrar

Client expectations have shifted significantly in today’s technology-driven world. Quick communication and greater transparency are now a priority for clients throughout the entire case life cycle. This growing demand for tech-enhanced processes comes not only from clients but also from staff, and is set to rise even further as more advances become available.

I see the shift to cloud-based digital systems, especially for small and midsized law firms, as evening the playing field by providing access to robust tools that can aid legal services. Here are some examples of how legal professionals are leveraging tech every day…


Just 10% of law firms have a GenAI policy, new Thomson Reuters report shows — from legaltechnology.com by Caroline Hill

Just 10% of law firms and 21% of corporate legal teams have now implemented policies to guide their organisation’s use of generative AI, according to a report out today (2 December) from Thomson Reuters.


AI & Law Symposium: Students Exploring Innovation, Challenges, and Legal Implications of a Technological Revolution — from allard.ubc.ca

Artificial Intelligence (AI) has been rapidly deployed around the world in a growing number of sectors, offering unprecedented opportunities while raising profound legal and ethical questions. This symposium will explore the transformative power of AI, focusing on its benefits, limitations, and the legal challenges it poses.

AI’s ability to revolutionize sectors such as healthcare, law, and business holds immense potential, from improving efficiency and access to services, to providing new tools for analysis and decision-making. However, the deployment of AI also introduces significant risks, including bias, privacy concerns, and ethical dilemmas that challenge existing legal and regulatory frameworks. As AI technologies continue to evolve, it is crucial to assess their implications critically to ensure responsible and equitable development.


The role of legal teams in creating AI ethics guardrails — from legaldive.com by Catherine Dawson
For organizations to balance the benefits of artificial intelligence with its risk, it’s important for counsel to develop policy on data governance and privacy.


How Legal Aid and Tech Collaboration Can Bridge the Justice Gap — from law.com by Kelli Raker and Maya Markovich
“Technology, when thoughtfully developed and implemented, has the potential to expand access to legal services significantly,” write Kelli Raker and Maya Markovich.

Challenges and Concerns
Despite the potential benefits, legal aid organizations face several hurdles in working with new technologies:

1. Funding and incentives: Most funding for legal aid is tied to direct legal representation, leaving little room for investment in general case management or exploration of innovative service delivery methods to exponentially scale impact.

2. Jurisdictional inconsistency: The lack of a unified court system or standardized forms across regions makes it challenging to develop accurate and widely applicable tech solutions in certain types of matters.

3. Organizational capacity: Many legal aid organizations lack the time and resources to thoroughly evaluate new tech offerings or collaboration opportunities or identify internal workflows and areas of unmet need with the highest chance for impact.

4. Data privacy and security: Legal aid providers need assurance that tech protects client data and avoids misuse of sensitive information.

5. Ethical considerations: There’s significant concern about the accuracy of information produced by consumer-facing technology and the potential for inadvertent unauthorized practice of law.

 
 

Below is an excerpt from 2024: The State of Generative AI in the Enterprise — from Menlo Ventures

  • Legal: Historically resistant to tech, the legal industry ($350 million in enterprise AI spend) is now embracing generative AI to manage massive amounts of unstructured data and automate complex, pattern-based workflows. The field broadly divides into litigation and transactional law, with numerous subspecialties. Rooted in litigation, Everlaw* focuses on legal holds, e-discovery, and trial preparation, while Harvey and Spellbook are advancing AI in transactional law with solutions for contract review, legal research, and M&A. Specific practice areas are also targeted AI innovations: EvenUp focuses on injury law, Garden on patents and intellectual property, Manifest on immigration and employment law, while Eve* is re-inventing plaintiff casework from client intake to resolution.

Excerpt from Brainyacts #250 (from 11/22/24) — from the Leveraging Generative AI in Client Interviews section

Here’s what the article from Forbes said:

  • CodeSignal, an AI tech company, has launched Conversation Practice, an AI-driven platform to help learners practice critical workplace communication and soft skills.
  • Conversation Practice uses multiple AI models and a natural spoken interface to simulate real-world scenarios and provide feedback.
  • The goal is to address the challenge of developing conversational skills through iterative practice, without the awkwardness of peer role-play.

What I learned about this software changed my perception about how I can prepare in the future for client meetings. Here’s what I’ve taken away from the potential use of this software in a legal practice setting:


Why Technology-Driven Law Firms Are Poised For Long-Term Success — from forbes.com by Daniel Farrar

I see the shift to cloud-based digital systems, especially for small and midsized law firms, as leveling the playing field by providing access to robust tools that can aid legal services. Here are some examples of how legal professionals are leveraging tech every day:

    • Cloud-based case management solutions. These help enhance productivity through collaboration tools and automated workflows while keeping data secure.
    • E-discovery tools. These tools manage vast amounts of data and help speed up litigation processes.
    • Artificial intelligence. AI has helped automate tasks for legal professionals including for case management, research, contract review and predictive analytics.
 

2024: The State of Generative AI in the Enterprise — from menlovc.com (Menlo Ventures)
The enterprise AI landscape is being rewritten in real time. As pilots give way to production, we surveyed 600 U.S. enterprise IT decision-makers to reveal the emerging winners and losers.

This spike in spending reflects a wave of organizational optimism; 72% of decision-makers anticipate broader adoption of generative AI tools in the near future. This confidence isn’t just speculative—generative AI tools are already deeply embedded in the daily work of professionals, from programmers to healthcare providers.

Despite this positive outlook and increasing investment, many decision-makers are still figuring out what will and won’t work for their businesses. More than a third of our survey respondents do not have a clear vision for how generative AI will be implemented across their organizations. This doesn’t mean they’re investing without direction; it simply underscores that we’re still in the early stages of a large-scale transformation. Enterprise leaders are just beginning to grasp the profound impact generative AI will have on their organizations.


Business spending on AI surged 500% this year to $13.8 billion, says Menlo Ventures — from cnbc.com by Hayden Field

Key Points

  • Business spending on generative AI surged 500% this year, hitting $13.8 billion — up from just $2.3 billion in 2023, according to data from Menlo Ventures released Wednesday.
  • OpenAI ceded market share in enterprise AI, declining from 50% to 34%, per the report.
  • Amazon-backed Anthropic doubled its market share from 12% to 24%.

Microsoft quietly assembles the largest AI agent ecosystem—and no one else is close — from venturebeat.com by Matt Marshall

Microsoft has quietly built the largest enterprise AI agent ecosystem, with over 100,000 organizations creating or editing AI agents through its Copilot Studio since launch – a milestone that positions the company ahead in one of enterprise tech’s most closely watched and exciting segments.

The rapid adoption comes as Microsoft significantly expands its agent capabilities. At its Ignite conference [that started on 11/19/24], the company announced it will allow enterprises to use any of the 1,800 large language models (LLMs) in the Azure catalog within these agents – a significant move beyond its exclusive reliance on OpenAI’s models. The company also unveiled autonomous agents that can work independently, detecting events and orchestrating complex workflows with minimal human oversight.


Now Hear This: World’s Most Flexible Sound Machine Debuts — from
Using text and audio as inputs, a new generative AI model from NVIDIA can create any combination of music, voices and sounds.

Along these lines, also see:


AI Agents Versus Human Agency: 4 Ways To Navigate Our AI-Driven World — from forbes.com by Cornelia C. Walther

To understand the implications of AI agents, it’s useful to clarify the distinctions between AI, generative AI, and AI agents and explore the opportunities and risks they present to our autonomy, relationships, and decision-making.

AI Agents: These are specialized applications of AI designed to perform tasks or simulate interactions. AI agents can be categorized into:

    • Tool Agents…
    • Simulation Agents…

While generative AI creates outputs from prompts, AI agents use AI to act with intention, whether to assist (tool agents) or emulate (simulation agents). The latter’s ability to mirror human thought and action offers fascinating possibilities — and raises significant risks.
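The distinction above can be made concrete with a minimal sketch. Everything here is illustrative: `generate` and `choose_action` are stand-ins for a real model call and a real decision step, and actual agent frameworks differ substantially.

```python
def generate(prompt: str) -> str:
    """Generative AI: text in, text out -- no actions taken."""
    return f"[draft text responding to: {prompt}]"

def choose_action(goal: str, history: list) -> str:
    """Stub policy: search once, then stop.
    (A real tool agent would ask an LLM to pick the next action.)"""
    return "search" if not history else "done"

def tool_agent(goal: str, tools: dict) -> list:
    """Tool agent: acts with intention, invoking tools until the goal
    is judged complete, rather than only emitting text."""
    history = []
    while True:
        action = choose_action(goal, history)
        if action == "done":
            return history
        history.append(tools[action](goal))

# A single hypothetical tool the agent can call.
tools = {"search": lambda goal: f"search results for '{goal}'"}
```

The point of the sketch is the loop: a generative call ends after one output, while an agent keeps deciding and acting, which is exactly where both the assistive potential and the autonomy risks come from.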

 

2024-11-22: The Race to the Top — Dario Amodei on AGI, Risks, and the Future of Anthropic — from emergentbehavior.co by Prakash (Ate-a-Pi)

Risks on the Horizon: ASL Levels
The two key risks Dario is concerned about are:

a) cyber, bio, radiological, nuclear (CBRN)
b) model autonomy

These risks are captured in Anthropic’s framework for understanding AI Safety Levels (ASL):

1. ASL-1: Narrow-task AI like Deep Blue (no autonomy, minimal risk).
2. ASL-2: Current systems like ChatGPT/Claude, which lack autonomy and don’t pose significant risks beyond information already accessible via search engines.
3. ASL-3: Agents arriving soon (potentially next year) that can meaningfully assist non-state actors in dangerous activities like cyber or CBRN (chemical, biological, radiological, nuclear) attacks. Security and filtering are critical at this stage to prevent misuse.
4. ASL-4: AI smart enough to evade detection, deceive testers, and assist state actors with dangerous projects. At this level, the model itself is capable enough that its misuse becomes the primary concern. Mechanistic interpretability becomes crucial for verifying AI behavior.
5. ASL-5: AGI surpassing human intelligence in all domains, posing unprecedented challenges.

Anthropic’s if/then framework ensures proactive responses: if a model demonstrates danger, the team clamps down hard, enforcing strict controls.
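The if/then structure described above can be sketched as a simple mapping from demonstrated capability level to required safeguards. This is purely illustrative (not Anthropic's actual policy machinery), and the safeguard labels are shorthand for the descriptions in the list above.

```python
# Illustrative sketch of the ASL if/then framework: each level a model
# demonstrates triggers the safeguards at that level and below.
ASL_SAFEGUARDS = {
    1: "standard release practices",               # narrow-task AI (e.g., Deep Blue)
    2: "usage policies and monitoring",            # current chatbots
    3: "hardened security and misuse filtering",   # CBRN/cyber uplift
    4: "mechanistic interpretability verification",# can evade detection
    5: "controls beyond any current practice",     # surpasses humans in all domains
}

def required_safeguards(demonstrated_level: int) -> list:
    """If a model demonstrates ASL-n capabilities, every safeguard up to
    and including level n must be enforced before release."""
    return [ASL_SAFEGUARDS[lvl] for lvl in range(1, demonstrated_level + 1)]
```

The design point is the trigger: safeguards are conditional on demonstrated capability, so the response escalates automatically as evaluations reveal danger rather than relying on ad hoc judgment.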



Should You Still Learn to Code in an A.I. World? — from nytimes.com by
Coding boot camps once looked like the golden ticket to an economically secure future. But as that promise fades, what should you do? Keep learning, until further notice.

Compared with five years ago, the number of active job postings for software developers has dropped 56 percent, according to data compiled by CompTIA. For inexperienced developers, the plunge is an even worse 67 percent.
“I would say this is the worst environment for entry-level jobs in tech, period, that I’ve seen in 25 years,” said Venky Ganesan, a partner at the venture capital firm Menlo Ventures.

For years, the career advice from everyone who mattered — the Apple chief executive Tim Cook, your mother — was “learn to code.” It felt like an immutable equation: Coding skills + hard work = job.

Now the math doesn’t look so simple.

Also see:

AI builds apps in 2 mins flat — where The Neuron mentions this excerpt about Lovable:

There’s a new coding startup in town, and it just MIGHT have everybody else shaking in their boots (we’ll qualify that in a sec, don’t worry).

It’s called Lovable, the “world’s first AI fullstack engineer.”

Lovable does all of that by itself. Tell it what you want to build in plain English, and it creates everything you need. Want users to be able to log in? One click. Need to store data? One click. Want to accept payments? You get the idea.

Early users are backing up these claims. One person even launched a startup that made Product Hunt’s top 10 using just Lovable.

As for us, we made a Wordle clone in 2 minutes with one prompt. Only edit needed? More words in the dictionary. It’s like, really easy y’all.


When to chat with AI (and when to let it work) — from aiwithallie.beehiiv.com by Allie K. Miller

Re: some ideas on how to use Notebook LM:

  • Turn your company’s annual report into an engaging podcast
  • Create an interactive FAQ for your product manual
  • Generate a timeline of your industry’s history from multiple sources
  • Produce a study guide for your online course content
  • Develop a Q&A system for your company’s knowledge base
  • Synthesize research papers into digestible summaries
  • Create an executive content briefing from multiple competitor blog posts
  • Generate a podcast discussing the key points of a long-form research paper

Introducing conversation practice: AI-powered simulations to build soft skills — from codesignal.com by Albert Sahakyan

From DSC:
I have to admit I’m a bit suspicious here, as the “conversation practice” product seems a bit too scripted at times, but I post it because the idea of using AI to practice and develop soft skills makes a great deal of sense:


 
© 2024 | Daniel Christian