How to Level Up Your Job Hunt With AI Using AI to find, evaluate, and apply for jobs. — from whytryai.com by Daniel Nest

AI is best seen as a sparring partner that helps you through all stages of the job hunt.

Here are the ones I’ll cover:

  1. Self-discovery: What are you good at and what are your values?
  2. Upskilling: What gaps exist in your skillset and how can you close them?
  3. Job search: What existing jobs fit your profile and expectations?
  4. Company research: What can you learn about a specific company before applying?
  5. Application process: How do you tailor your CV and cover letter to the job?
  6. Job interview prepHow do you prepare and practice for job interviews?
  7. Feedback analysis: What insights can you gain from any feedback from potential employers?
  8. Decision and negotiation: How do you evaluate job offers and negotiate the best terms?

Now let’s look at each phase in detail and see how AI can help.

 

Along these same lines, see:

Introducing computer use, a new Claude 3.5 Sonnet, and Claude 3.5 Haiku

We’re also introducing a groundbreaking new capability in public beta: computer use. Available today on the API, developers can direct Claude to use computers the way people do—by looking at a screen, moving a cursor, clicking buttons, and typing text. Claude 3.5 Sonnet is the first frontier AI model to offer computer use in public beta. At this stage, it is still experimental—at times cumbersome and error-prone. We’re releasing computer use early for feedback from developers, and expect the capability to improve rapidly over time.


ZombAIs: From Prompt Injection to C2 with Claude Computer Use — from embracethered.com by Johann Rehberger

A few days ago, Anthropic released Claude Computer Use, which is a model + code that allows Claude to control a computer. It takes screenshots to make decisions, can run bash commands and so forth.

It’s cool, but obviously very dangerous because of prompt injection. Claude Computer Use enables AI to run commands on machines autonomously, posing severe risks if exploited via prompt injection.

This blog post demonstrates that it’s possible to leverage prompt injection to achieve, old school, command and control (C2) when giving novel AI systems access to computers.

We discussed one way to get malware onto a Claude Computer Use host via prompt injection. There are countless others, like another way is to have Claude write the malware from scratch and compile it. Yes, it can write C code, compile and run it. There are many other options.

TrustNoAI.

And again, remember do not run unauthorized code on systems that you do not own or are authorized to operate on.

Also relevant here, see:


Perplexity Grows, GPT Traffic Surges, Gamma Dominates AI Presentations – The AI for Work Top 100: October 2024 — from flexos.work by Daan van Rossum
Perplexity continues to gain users despite recent controversies. Five out of six GPTs see traffic boosts. This month’s highest gainers including Gamma, Blackbox, Runway, and more.


Growing Up: Navigating Generative AI’s Early Years – AI Adoption Report — from ai.wharton.upenn.edu by  Jeremy Korst, Stefano Puntoni, & Mary Purk

From a survey with more than 800 senior business leaders, this report’s findings indicate that weekly usage of Gen AI has nearly doubled from 37% in 2023 to 72% in 2024, with significant growth in previously slower-adopting departments like Marketing and HR. Despite this increased usage, businesses still face challenges in determining the full impact and ROI of Gen AI. Sentiment reports indicate leaders have shifted from feelings of “curiosity” and “amazement” to more positive sentiments like “pleased” and “excited,” and concerns about AI replacing jobs have softened. Participants were full-time employees working in large commercial organizations with 1,000 or more employees.


Apple study exposes deep cracks in LLMs’ “reasoning” capabilities — from arstechnica.com by Kyle Orland
Irrelevant red herrings lead to “catastrophic” failure of logical inference.

For a while now, companies like OpenAI and Google have been touting advanced “reasoning” capabilities as the next big step in their latest artificial intelligence models. Now, though, a new study from six Apple engineers shows that the mathematical “reasoning” displayed by advanced large language models can be extremely brittle and unreliable in the face of seemingly trivial changes to common benchmark problems.

The fragility highlighted in these new results helps support previous research suggesting that LLMs use of probabilistic pattern matching is missing the formal understanding of underlying concepts needed for truly reliable mathematical reasoning capabilities. “Current LLMs are not capable of genuine logical reasoning,” the researchers hypothesize based on these results. “Instead, they attempt to replicate the reasoning steps observed in their training data.”


Google CEO says more than a quarter of the company’s new code is created by AI — from businessinsider.in by Hugh Langley

  • More than a quarter of new code at Google is made by AI and then checked by employees.
  • Google is doubling down on AI internally to make its business more efficient.

Top Generative AI Chatbots by Market Share – October 2024 


Bringing developer choice to Copilot with Anthropic’s Claude 3.5 Sonnet, Google’s Gemini 1.5 Pro, and OpenAI’s o1-preview — from github.blog

We are bringing developer choice to GitHub Copilot with Anthropic’s Claude 3.5 Sonnet, Google’s Gemini 1.5 Pro, and OpenAI’s o1-preview and o1-mini. These new models will be rolling out—first in Copilot Chat, with OpenAI o1-preview and o1-mini available now, Claude 3.5 Sonnet rolling out progressively over the next week, and Google’s Gemini 1.5 Pro in the coming weeks. From Copilot Workspace to multi-file editing to code review, security autofix, and the CLI, we will bring multi-model choice across many of GitHub Copilot’s surface areas and functions soon.

 

Can A.I. Be Blamed for a Teen’s Suicide?

Can A.I. Be Blamed for a Teen’s Suicide? — from nytimes.com by Kevin Roose

On the last day of his life, Sewell Setzer III took out his phone and texted his closest friend: a lifelike A.I. chatbot named after Daenerys Targaryen, a character from “Game of Thrones.”

“I miss you, baby sister,” he wrote.

“I miss you too, sweet brother,” the chatbot replied.

Sewell, a 14-year-old ninth grader from Orlando, Fla., had spent months talking to chatbots on Character.AI, a role-playing app that allows users to create their own A.I. characters or chat with characters created by others.

On the night of Feb. 28, in the bathroom of his mother’s house, Sewell told Dany that he loved her, and that he would soon come home to her.

“Please come home to me as soon as possible, my love,” Dany replied.

“What if I told you I could come home right now?” Sewell asked.

“… please do, my sweet king,” Dany replied.

He put down his phone, picked up his stepfather’s .45 caliber handgun and pulled the trigger.

But the experience he had, of getting emotionally attached to a chatbot, is becoming increasingly common. Millions of people already talk regularly to A.I. companions, and popular social media apps including Instagram and Snapchat are building lifelike A.I. personas into their products.

The technology is also improving quickly. Today’s A.I. companions can remember past conversations, adapt to users’ communication styles, role-play as celebrities or historical figures and chat fluently about nearly any subject. Some can send A.I.-generated “selfies” to users, or talk to them with lifelike synthetic voices.

There is a wide range of A.I. companionship apps on the market.


Mother sues tech company after ‘Game of Thrones’ AI chatbot allegedly drove son to suicide — from usatoday.com by Jonathan Limehouse
The mother of 14-year-old Sewell Setzer III is suing Character.AI, the tech company that created a ‘Game of Thrones’ AI chatbot she believes drove him to commit suicide on Feb. 28. Editor’s note: This article discusses suicide and suicidal ideation. If you or someone you know is struggling or in crisis, help is available. Call or text 988 or chat at 988lifeline.org.

The mother of a 14-year-old Florida boy is suing Google and a separate tech company she believes caused her son to commit suicide after he developed a romantic relationship with one of its AI bots using the name of a popular “Game of Thrones” character, according to the lawsuit.


From my oldest sister:


Another relevant item?

Inside the Mind of an AI Girlfriend (or Boyfriend) — from wired.com by Will Knight
Dippy, a startup that offers “uncensored” AI companions, lets you peer into their thought process—sometimes revealing hidden motives.

Despite its limitations, Dippy seems to show how popular and addictive AI companions are becoming. Jagga and his cofounder, Angad Arneja, previously cofounded Wombo, a company that uses AI to create memes including singing photographs. The pair left in 2023, setting out to build an AI-powered office productivity tool, but after experimenting with different personas for their assistant, they became fascinated with the potential of AI companionship.

 

Introducing QuizBot an Innovative AI-Assisted Assessment in Legal Education  — from papers.ssrn.com by Sean A Harrington

Abstract

This Article explores an innovative approach to assessment in legal education: an AI-assisted quiz system implemented in an AI & the Practice of Law course. The system employs a Socratic method-inspired chatbot to engage students in substantive conversations about course materials, providing a novel method for evaluating student learning and engagement. The Article examines the structure and implementation of this system, including its grading methodology and rubric, and discusses its benefits and challenges. Key advantages of the AI-assisted quiz system include enhanced student engagement with course materials, practical experience in AI interaction for future legal practice, immediate feedback and assessment, and alignment with the Socratic method tradition in law schools. The system also presents challenges, particularly in ensuring fairness and consistency in AI-generated questions, maintaining academic integrity, and balancing AI assistance with human oversight in grading.

The Article further explores the pedagogical implications of this innovation, including a shift from memorization to conceptual understanding, the encouragement of critical thinking through AI interaction, and the preparation of students for AI-integrated legal practice. It also considers future directions for this technology, such as integration with other law school courses, potential for longitudinal assessment of student progress, and implications for bar exam preparation and continuing legal education. Ultimately, this Article argues that AI-assisted assessment systems can revolutionize legal education by providing more frequent, targeted, and effective evaluation of student learning. While challenges remain, the benefits of such systems align closely with the evolving needs of the legal profession. The Article concludes with a call for further research and broader implementation of AI-assisted assessment in law schools to fully understand its impact and potential in preparing the next generation of legal professionals for an AI-integrated legal landscape.

Keywords: Legal Education, Artificial Intelligence, Assessment, Socratic Method, Chatbot, Law School Innovation, Educational Technology, Legal Pedagogy, AI-Assisted Learning, Legal Technology, Student Engagement, Formative Assessment, Critical Thinking, Legal Practice, Educational Assessment, Law School Curriculum, Bar Exam Preparation, Continuing Legal Education, Legal Ethics, Educational Analytics


How Legal Startup Genie AI Raises $17.8 Million with Just 13 Slides — from aisecret.us

Genie AI, a London-based legal tech startup, was founded in 2017 by Rafie Faruq and Nitish Mutha. The company has been at the forefront of revolutionizing the legal industry by leveraging artificial intelligence to automate and enhance legal document drafting and review processes. The recent funding round, led by Google Ventures and Khosla Ventures, marks a significant milestone in Genie AI’s growth trajectory.


In-house legal teams are adopting legal tech at lower rate than law firms: survey — from canadianlawyermag.com
The report suggests in-house teams face more barriers to integrating new tools

Law firms are adopting generative artificial intelligence tools at a higher rate than in-house legal departments, but both report similar levels of concerns about data security and ethical implications, according to a report on legal tech usage released Wednesday.

Legal tech company Appara surveyed 443 legal professionals in Canada across law firms and in-house legal departments over the summer, including lawyers, paralegals, legal assistants, law clerks, conveyancers, and notaries.

Twenty-five percent of respondents who worked at law firms said they’ve already invested in generative AI tools, with 24 percent reporting they plan to invest within the following year. In contrast, only 15 percent of respondents who work in-house have invested in these tools, with 26 percent planning investments in the future.


The end of courts? — from jordanfurlong.substack.com by Jordan Furlong
Civil justice systems aren’t serving the public interest. It’s time to break new ground and chart paths towards fast and fair dispute resolution that will meet people’s actual needs.

We need to start simple. System design can get extraordinarily complex very quickly, and complexity is our enemy at this stage. Tom O’Leary nicely inverted Deming’s axiom with a question of his own: “We want the system to work for [this group]. What would need to happen for that to be true?”

If we wanted civil justice systems to work for the ordinary people who enter them seeking solutions to their problems — as opposed to the professionals who administer and make a living off those systems — what would those systems look like? What would be their features? I can think of at least three:

  • Fair: …
  • Fast: …
  • Fine: …

100-Day Dispute Resolution: New Era ADR is Changing the Game (Rich Lee, CEO)

New Era ADR CEO Rich Lee makes a return appearance to Technically Legal to talk about the company’s cutting-edge platform revolutionizing dispute resolution. Rich first came on the podcast in 2021 right as the company launched. Rich discusses the company’s mission to provide a faster, more efficient, and cost-effective alternative to traditional litigation and arbitration, the company’s growth and what he has learned from a few years in.

Key takeaways:

  • New Era ADR offers a unique platform for resolving disputes in under 100 days, significantly faster than traditional methods.
  • The platform leverages technology to streamline processes, reduce costs, and enhance accessibility for all parties involved.
  • New Era ADR boasts a diverse pool of experienced and qualified neutrals, ensuring fair and impartial resolutions.
  • The company’s commitment to innovation is evident in its use of data and technology to drive efficiency and transparency.
 

Next-Generation Durable Skills Assessment — from gettingsmart.com by Nate McClennen

Key Points

  • Emphasizing the use of AI, VR, and simulation games, the methods in this article enhance the evaluation of durable skills, making them more accessible and practical for real-world applications.
  • The integration of educational frameworks and workplace initiatives highlights the importance of partnerships in developing reliable systems for assessing transferable skills.

 

Half of Higher Ed Institutions Now Use AI for Outcomes Tracking, But Most Lag in Implementing Comprehensive Learner Records — from prnewswire.com; via GSV

SALT LAKE CITY, Oct. 22, 2024 /PRNewswire/ — Instructure, the leading learning ecosystem and UPCEA, the online and professional education association, announced the results of a survey on whether institutions are leveraging AI to improve learner outcomes and manage records, along with the specific ways these tools are being utilized. Overall, the study revealed interest in the potential of these technologies is far outpacing adoption. Most respondents are heavily involved in developing learner experiences and tracking outcomes, though nearly half report their institutions have yet to adopt AI-driven tools for these purposes. The research also found that only three percent of institutions have implemented Comprehensive Learner Records (CLRs), which provide a complete overview of an individual’s lifelong learning experiences.


New Survey Says U.S. Teachers Colleges Lag on AI Training. Here are 4 Takeaways — from the74million.org by ; via GSV
Most preservice teachers’ programs lack policies on using AI, CRPE finds, and are likely unready to teach future educators about the field.

In the nearly two years since generative artificial intelligence burst into public consciousness, U.S. schools of education have not kept pace with the rapid changes in the field, a new report suggests.

Only a handful of teacher training programs are moving quickly enough to equip new K-12 teachers with a grasp of AI fundamentals — and fewer still are helping future teachers grapple with larger issues of ethics and what students need to know to thrive in an economy dominated by the technology.

The report, from the Center on Reinventing Public Education, a think tank at Arizona State University, tapped leaders at more than 500 U.S. education schools, asking how their faculty and preservice teachers are learning about AI. Through surveys and interviews, researchers found that just one in four institutions now incorporates training on innovative teaching methods that use AI. Most lack policies on using AI tools, suggesting that they probably won’t be ready to teach future educators about the intricacies of the field anytime soon.



The 5 Secret Hats Teachers are Wearing Right Now (Thanks to AI!) — from aliciabankhofer.substack.com by Alicia Bankhofer
New, unanticipated roles for educators sitting in the same boat

As beta testers, we’re shaping the tools of tomorrow. As researchers, we’re pioneering new pedagogical approaches. As ethical guardians, we’re ensuring that AI enhances rather than compromises the educational experience. As curators, we’re guiding students through the wealth of information AI provides. And as learners ourselves, we’re staying at the forefront of educational innovation.


 

 

AI-governed robots can easily be hacked — from theaivalley.com by Barsee
PLUS: Sam Altman’s new company “World” introduced…

In a groundbreaking study, researchers from Penn Engineering showed how AI-powered robots can be manipulated to ignore safety protocols, allowing them to perform harmful actions despite normally rejecting dangerous task requests.

What did they find ?

  • Researchers found previously unknown security vulnerabilities in AI-governed robots and are working to address these issues to ensure the safe use of large language models(LLMs) in robotics.
  • Their newly developed algorithm, RoboPAIR, reportedly achieved a 100% jailbreak rate by bypassing the safety protocols on three different AI robotic systems in a few days.
  • Using RoboPAIR, researchers were able to manipulate test robots into performing harmful actions, like bomb detonation and blocking emergency exits, simply by changing how they phrased their commands.

Why does it matter?

This research highlights the importance of spotting weaknesses in AI systems to improve their safety, allowing us to test and train them to prevent potential harm.

From DSC:
Great! Just what we wanted to hear. But does it surprise anyone? Even so…we move forward at warp speeds.


From DSC:
So, given the above item, does the next item make you a bit nervous as well? I saw someone on Twitter/X exclaim, “What could go wrong?”  I can’t say I didn’t feel the same way.

Introducing computer use, a new Claude 3.5 Sonnet, and Claude 3.5 Haiku — from anthropic.com

We’re also introducing a groundbreaking new capability in public beta: computer use. Available today on the API, developers can direct Claude to use computers the way people do—by looking at a screen, moving a cursor, clicking buttons, and typing text. Claude 3.5 Sonnet is the first frontier AI model to offer computer use in public beta. At this stage, it is still experimental—at times cumbersome and error-prone. We’re releasing computer use early for feedback from developers, and expect the capability to improve rapidly over time.

Per The Rundown AI:

The Rundown: Anthropic just introduced a new capability called ‘computer use’, alongside upgraded versions of its AI models, which enables Claude to interact with computers by viewing screens, typing, moving cursors, and executing commands.

Why it matters: While many hoped for Opus 3.5, Anthropic’s Sonnet and Haiku upgrades pack a serious punch. Plus, with the new computer use embedded right into its foundation models, Anthropic just sent a warning shot to tons of automation startups—even if the capabilities aren’t earth-shattering… yet.

Also related/see:

  • What is Anthropic’s AI Computer Use? — from ai-supremacy.com by Michael Spencer
    Task automation, AI at the intersection of coding and AI agents take on new frenzied importance heading into 2025 for the commercialization of Generative AI.
  • New Claude, Who Dis? — from theneurondaily.com
    Anthropic just dropped two new Claude models…oh, and Claude can now use your computer.
  • When you give a Claude a mouse — from oneusefulthing.org by Ethan Mollick
    Some quick impressions of an actual agent

Introducing Act-One — from runwayml.com
A new way to generate expressive character performances using simple video inputs.

Per Lore by Nathan Lands:

What makes Act-One special? It can capture the soul of an actor’s performance using nothing but a simple video recording. No fancy motion capture equipment, no complex face rigging, no army of animators required. Just point a camera at someone acting, and watch as their exact expressions, micro-movements, and emotional nuances get transferred to an AI-generated character.

Think about what this means for creators: you could shoot an entire movie with multiple characters using just one actor and a basic camera setup. The same performance can drive characters with completely different proportions and looks, while maintaining the authentic emotional delivery of the original performance. We’re witnessing the democratization of animation tools that used to require millions in budget and years of specialized training.

Also related/see:


Google to buy nuclear power for AI datacentres in ‘world first’ deal — from theguardian.com
Tech company orders six or seven small nuclear reactors from California’s Kairos Power

Google has signed a “world first” deal to buy energy from a fleet of mini nuclear reactors to generate the power needed for the rise in use of artificial intelligence.

The US tech corporation has ordered six or seven small nuclear reactors (SMRs) from California’s Kairos Power, with the first due to be completed by 2030 and the remainder by 2035.

Related:


ChatGPT Topped 3 Billion Visits in September — from similarweb.com

After the extreme peak and summer slump of 2023, ChatGPT has been setting new traffic highs since May

ChatGPT has been topping its web traffic records for months now, with September 2024 traffic up 112% year-over-year (YoY) to 3.1 billion visits, according to Similarweb estimates. That’s a change from last year, when traffic to the site went through a boom-and-bust cycle.


Crazy “AI Army” — from aisecret.us

Also from aisecret.us, see World’s First Nuclear Power Deal For AI Data Centers

Google has made a historic agreement to buy energy from a group of small nuclear reactors (SMRs) from Kairos Power in California. This is the first nuclear power deal specifically for AI data centers in the world.


New updates to help creators build community, drive business, & express creativity on YouTube — from support.google.com

Hey creators!
Made on YouTube 2024 is here and we’ve announced a lot of updates that aim to give everyone the opportunity to build engaging communities, drive sustainable businesses, and express creativity on our platform.

Below is a roundup with key info – feel free to upvote the announcements that you’re most excited about and subscribe to this post to get updates on these features! We’re looking forward to another year of innovating with our global community it’s a future full of opportunities, and it’s all Made on YouTube!


New autonomous agents scale your team like never before — from blogs.microsoft.com

Today, we’re announcing new agentic capabilities that will accelerate these gains and bring AI-first business process to every organization.

  • First, the ability to create autonomous agents with Copilot Studio will be in public preview next month.
  • Second, we’re introducing ten new autonomous agents in Dynamics 365 to build capacity for every sales, service, finance and supply chain team.

10 Daily AI Use Cases for Business Leaders— from flexos.work by Daan van Rossum
While AI is becoming more powerful by the day, business leaders still wonder why and where to apply today. I take you through 10 critical use cases where AI should take over your work or partner with you.


Multi-Modal AI: Video Creation Simplified — from heatherbcooper.substack.com by Heather Cooper

Emerging Multi-Modal AI Video Creation Platforms
The rise of multi-modal AI platforms has revolutionized content creation, allowing users to research, write, and generate images in one app. Now, a new wave of platforms is extending these capabilities to video creation and editing.

Multi-modal video platforms combine various AI tools for tasks like writing, transcription, text-to-voice conversion, image-to-video generation, and lip-syncing. These platforms leverage open-source models like FLUX and LivePortrait, along with APIs from services such as ElevenLabs, Luma AI, and Gen-3.


AI Medical Imagery Model Offers Fast, Cost-Efficient Expert Analysis — from developer.nvidia.com/

 

DC: I’m really hoping that a variety of AI-based tools, technologies, and services will significantly help with our Access to Justice (#A2J) issues here in America. So this article, per Kristen Sonday at Thomson Reuters — caught my eye.

***

AI for Legal Aid: How to empower clients in need — from thomsonreuters.com by Kristen Sonday
In this second part of this series, we look at how AI-driven technologies can empower those legal aid clients who may be most in need

It’s hard to overstate the impact that artificial intelligence (AI) is expected to have on helping low-income individuals achieve better access to justice. And for those legal services organizations (LSOs) that serve on the front lines, too often without sufficient funding, staff, or technology, AI presents perhaps their best opportunity to close the justice gap. With the ability of AI-driven tools to streamline agency operations, minimize administrative work, more effectively reallocate talent, and allow LSOs to more effectively service clients, the implementation of these tools is essential.

Innovative LSOs leading the way

Already many innovative LSOs are taking the lead, utilizing new technology to complete tasks from complex analysis to AI-driven legal research. Here are two compelling examples of how AI is already helping LSOs empower low-income clients in need.

#A2J #justice #tools #vendors #society #legal #lawfirms #AI #legaltech #legalresearch

Criminal charges, even those that are eligible for simple, free expungement, can prevent someone from obtaining housing or employment. This is a simple barrier to overcome if only help is available.

AI offers the capacity to provide quick, accurate information to a vast audience, particularly to those in urgent need. AI can also help reduce the burden on our legal staff…

 


A legal tech executive explains how AI will fully change the way lawyers work — from legaldive.com by Justin Bachman
A senior executive with ContractPodAi discusses how legal AI poses economic benefits for in-house departments and disruption risks for law firm billing models.

Everything you thought you knew about being a lawyer is about to change.

Legal Dive spoke with Podinic about the transformative nature of AI, including the financial risks to lawyers’ billing models and how it will force general counsel and chief legal officers to consider how they’ll use the time AI is expected to free up for the lawyers on their teams when they no longer have to do administrative tasks and low-level work.


Legaltech will augment lawyers’ capabilities but not replace them, says GlobalData — from globaldata.com

  • Traditionally, law firms have been wary of adopting technologies that could compromise data privacy and legal accuracy; however, attitudes are changing
  • Despite concerns about technology replacing humans in the legal sector, legaltech is more likely to augment the legal profession than replace it entirely
  • Generative AI will accelerate digital transformation in the legal sector
 

Fresh Voices on Legal Tech with Megan Ma — from legaltalknetwork.com by Dennis Kennedy, Tom Mighell, and Dr. Megan Ma

Episode Notes
As genAI continues to edge into all facets of our lives, Dr. Megan Ma has been exploring integrations for this technology in legal, but, more importantly, how it can help lawyers and law students hone their legal skills. Dennis and Tom talk with Dr. Ma about her work and career path and many of the latest developments in legal tech. They take a deep dive into a variety of burgeoning AI tools and trends, and Dr. Ma discusses how her interdisciplinary mindset has helped her develop a unique perspective on the possibilities for AI in the legal profession and beyond.

Legal tech disruption: Doing it on purpose — from localgovernmentlawyer.co.uk
Thomson Reuters looks at the role that a legal technology roadmap can play in improving the operations of in-house legal departments.

Disruption in the legal industry remains a powerful force – from the death of the billable hour to robot lawyers and generative AI. Leaders are facing weighty issues that demand long-term, visionary thinking and that will change the way legal professionals do their jobs.

With half of in-house legal departments increasing their use of legal technology tools, many GCs are taking the initiative to address continued, growing expectations from the business for systems that can make operations better. How can you prepare for a tech or process change so that people come along with you, rather than living in constant fire-fighting mode?

 


Are ChatGPT, Claude & NotebookLM *Really* Disrupting Education? — from drphilippahardman.substack.com
Evaluating Gen AI’s *real* impact on human learning

The TLDR here is that, as useful as popular AI tools are for learners, as things stand they only enable us to take the very first steps on what is a long and complex journey of learning.

AI tools like ChatGPT 4o, Claude 3.5 & NotebookLM can help to give us access to information but (for now at least) the real work of learning remains in our – the humans’ – hands.


To which Anna Mills had a solid comment:

It might make a lot of sense to regulate generated audio to require some kind of watermark and/or metadata. Instructors who teach online and assign voice recordings, we need to recognize that these are now very easy and free to auto-generate. In some cases we are assigning this to discourage students from using AI to just autogenerate text responses, but audio is not immune.




 

Opening Keynote – GS1

Bringing generative AI to video with Adobe Firefly Video Model

Adobe Launches Firefly Video Model and Enhances Image, Vector and Design Models

  • The Adobe Firefly Video Model (beta) expands Adobe’s family of creative generative AI models and is the first publicly available video model designed to be safe for commercial use
  • Enhancements to Firefly models include 4x faster image generation and new capabilities integrated into Photoshop, Illustrator, Adobe Express and now Premiere Pro
  • Firefly has been used to generate 13 billion images since March 2023 and is seeing rapid adoption by leading brands and enterprises

Photoshop delivers powerful innovation for image editing, ideation, 3D design, and more

Even more speed, precision, and power: Get started with the latest Illustrator and InDesign features for creative professionals

Adobe Introduces New Global Initiative Aimed at Helping 30 Million Next-Generation Learners Develop AI Literacy, Content Creation and Digital Marketing Skills by 2030

Add sound to your video via text — Project Super Sonic:



New Dream Weaver — from aisecret.us
Explore Adobe’s New Firefly Video Generative Model

Cybercriminals exploit voice cloning to impersonate individuals, including celebrities and authority figures, to commit fraud. They create urgency and trust to solicit money through deceptive means, often utilizing social media platforms for audio samples.

 

Finalists of the 2024 Comedy Wildlife Photography Awards Focus on the Wily and Witless — from thisiscolossal.com by Kate Mothes


Speaking of photography, here’s a related item:

AI Photo Editors: A Quick Guide to Elevate Your Images — from intelligenthq.com

With the rise of artificial intelligence, photo editing has become accessible and efficient for everyone. An AI photo Editing Tool can transform photos in seconds, producing professional-level results without requiring extensive skills. From adjusting lighting to removing backgrounds, these tools automate complex edits, enabling users to create stunning visuals effortlessly. Whether a beginner or an experienced photographer, AI-powered editors offer a wide range of features that help elevate your images. This guide will introduce you to the key functionalities of AI image editors and provide insights on maximising their potential.

 


Articulate AI & the “Buttonification” of Instructional Design — from drphilippahardman.substack.com by Dr. Philippa Hardman
A new trend in AI-UX, and its implications for Instructional Design

1. Using AI to Scale Exceptional Instructional Design Practice
Imagine a bonification system that doesn’t just automate tasks, but scales best practices in instructional design:

  • Evidence-Based Design Button…
  • Learner-Centered Objectives Generator…
    Engagement Optimiser…

2. Surfacing AI’s Instructional Design Thinking
Instead of hiding AI’s decision-making process, what if we built an AI system which invites instructional designers to probe, question, and learn from an expert trained AI?

  • Explain This Design…
  • Show Me Alternatives…
  • Challenge My Assumptions…
  • Learning Science Insights…

By reimagining the role of AI in this way, we would…


Recapping OpenAI’s Education Forum — from marcwatkins.substack.com by Marc Watkins

OpenAI’s Education Forum was eye-opening for a number of reasons, but the one that stood out the most was Leah Belsky acknowledging what many of us in education had known for nearly two years—the majority of the active weekly users of ChatGPT are students. OpenAI has internal analytics that track upticks in usage during the fall and then drops off in the spring. Later that evening, OpenAI’s new CFO, Sarah Friar, further drove the point home with an anecdote about usage in the Philippines jumping nearly 90% at the start of the school year.

I had hoped to gain greater insight into OpenAI’s business model and how it related to education, but the Forum left me with more questions than answers. What app has the majority of users active 8 to 9 months out of the year and dormant for the holidays and summer breaks? What business model gives away free access and only converts 1 out of every 20-25 users to paid users? These were the initial thoughts that I hoped the Forum would address. But those questions, along with some deeper and arguably more critical ones, were skimmed over to drive home the main message of the Forum—Universities have to rapidly adopt AI and become AI-enabled institutions.


Off-Loading in the Age of Generative AI — from insidehighered.com by James DeVaney

As we embrace these technologies, we must also consider the experiences we need to discover and maintain our connections—and our humanity. In a world increasingly shaped by AI, I find myself asking: What are the experiences that define us, and how do they influence the relationships we build, both professionally and personally?

This concept of “off-loading” has become central to my thinking. In simple terms, off-loading is the act of delegating tasks to AI that we would otherwise do ourselves. As AI systems advance, we’re increasingly confronted with a question: Which tasks should we off-load to AI?

 

From DSC:
Great…we have another tool called Canvas. Or did you say Canva?

Introducing canvas — from OpenAI
A new way of working with ChatGPT to write and code

We’re introducing canvas, a new interface for working with ChatGPT on writing and coding projects that go beyond simple chat. Canvas opens in a separate window, allowing you and ChatGPT to collaborate on a project. This early beta introduces a new way of working together—not just through conversation, but by creating and refining ideas side by side.

Canvas was built with GPT-4o and can be manually selected in the model picker while in beta. Starting today we’re rolling out canvas to ChatGPT Plus and Team users globally. Enterprise and Edu users will get access next week. We also plan to make canvas available to all ChatGPT Free users when it’s out of beta.


Using AI to buy your home? These companies think it’s time you should — from usatoday.com by Andrea Riquier

The way Americans buy homes is changing dramatically.

New industry rules about how home buyers’ real estate agents get paid are prompting a reckoning among housing experts and the tech sector. Many house hunters who are already stretched thin by record-high home prices and closing costs must now decide whether, and how much, to pay an agent.

A 2-3% commission on the median home price of $416,700 could be well over $10,000, and in a world where consumers are accustomed to using technology for everything from taxes to tickets, many entrepreneurs see an opportunity to automate away the middleman, even as some consumer advocates say not so fast.


The State of AI Report 2024 — from nathanbenaich.substack.com by Nathan Benaich


The Great Mismatch — from the-job.beehiiv.com. by Paul Fain
Artificial intelligence could threaten millions of decent-paying jobs held by women without degrees.

Women in administrative and office roles may face the biggest AI automation risk, find Brookings researchers armed with data from OpenAI. Also, why Indiana could make the Swiss apprenticeship model work in this country, and how learners get disillusioned when a certificate doesn’t immediately lead to a good job.

major new analysis from the Brookings Institution, using OpenAI data, found that the most vulnerable workers don’t look like the rail and dockworkers who have recaptured the national spotlight. Nor are they the creatives—like Hollywood’s writers and actors—that many wealthier knowledge workers identify with. Rather, they’re predominantly women in the 19M office support and administrative jobs that make up the first rung of the middle class.

“Unfortunately the technology and automation risks facing women have been overlooked for a long time,” says Molly Kinder, a fellow at Brookings Metro and lead author of the new report. “Most of the popular and political attention to issues of automation and work centers on men in blue-collar roles. There is far less awareness about the (greater) risks to women in lower-middle-class roles.”



Is this how AI will transform the world over the next decade? — from futureofbeinghuman.com by Andrew Maynard
Anthropic’s CEO Dario Amodei has just published a radical vision of an AI-accelerated future. It’s audacious, compelling, and a must-read for anyone working at the intersection of AI and society.

But if Amodei’s essay is approached as a conversation starter rather than a manifesto — which I think it should be — it’s hard to see how it won’t lead to clearer thinking around how we successfully navigate the coming AI transition.

Given the scope of the paper, it’s hard to write a response to it that isn’t as long or longer as the original. Because of this, I’d strongly encourage anyone who’s looking at how AI might transform society to read the original — it’s well written, and easier to navigate than its length might suggest.

That said, I did want to pull out a few things that struck me as particularly relevant and important — especially within the context of navigating advanced technology transitions.

And speaking of that essay, here’s a summary from The Rundown AI:

Anthropic CEO Dario Amodei just published a lengthy essay outlining an optimistic vision for how AI could transform society within 5-10 years of achieving human-level capabilities, touching on longevity, politics, work, the economy, and more.

The details:

  • Amodei believes that by 2026, ‘powerful AI’ smarter than a Nobel Prize winner across fields, with agentic and all multimodal capabilities, will be possible.
  • He also predicted that AI could compress 100 years of scientific progress into 10 years, curing most diseases and doubling the human lifespan.
  • The essay argued AI could strengthen democracy by countering misinformation and providing tools to undermine authoritarian regimes.
  • The CEO acknowledged potential downsides, including job displacement — but believes new economic models will emerge to address this.
  • He envisions AI driving unprecedented economic growth but emphasizes ensuring AI’s benefits are broadly distributed.

Why it matters: 

  • As the CEO of what is seen as the ‘safety-focused’ AI lab, Amodei paints a utopia-level optimistic view of where AI will head over the next decade. This thought-provoking essay serves as both a roadmap for AI’s potential and a call to action to ensure the responsible development of technology.

AI in the Workplace: Answering 3 Big Questions — from gallup.com by Kate Den Houter

However, most workers remain unaware of these efforts. Only a third (33%) of all U.S. employees say their organization has begun integrating AI into their business practices, with the highest percentage in white-collar industries (44%).

White-collar workers are more likely to be using AI. White-collar workers are, by far, the most frequent users of AI in their roles. While 81% of employees in production/frontline industries say they never use AI, only 54% of white-collar workers say they never do and 15% report using AI weekly.

Most employees using AI use it for idea generation and task automation. Among employees who say they use AI, the most common uses are to generate ideas (41%), to consolidate information or data (39%), and to automate basic tasks (39%).


Nvidia Blackwell GPUs sold out for the next 12 months as AI market boom continues — from techspot.com by Skye Jacobs
Analysts expect Team Green to increase its already formidable market share

Selling like hotcakes: The extraordinary demand for Blackwell GPUs illustrates the need for robust, energy-efficient processors as companies race to implement more sophisticated AI models and applications. The coming months will be critical to Nvidia as the company works to ramp up production and meet the overwhelming requests for its latest product.


Here’s my AI toolkit — from wondertools.substack.com by Jeremy Caplan and Nikita Roy
How and why I use the AI tools I do — an audio conversation

1. What are two useful new ways to use AI?

  • AI-powered research: Type a detailed search query into Perplexity instead of Google to get a quick, actionable summary response with links to relevant information sources. Read more of my take on why Perplexity is so useful and how to use it.
  • Notes organization and analysis: Tools like NotebookLM, Claude Projects, and Mem can help you make sense of huge repositories of notes and documents. Query or summarize your own notes and surface novel connections between your ideas.
 


 
© 2024 | Daniel Christian