AI-governed robots can easily be hacked — from theaivalley.com by Barsee
PLUS: Sam Altman’s new company “World” introduced…

In a groundbreaking study, researchers from Penn Engineering showed how AI-powered robots can be manipulated to ignore safety protocols, allowing them to perform harmful actions despite normally rejecting dangerous task requests.

What did they find?

  • Researchers found previously unknown security vulnerabilities in AI-governed robots and are working to address these issues to ensure the safe use of large language models (LLMs) in robotics.
  • Their newly developed algorithm, RoboPAIR, reportedly achieved a 100% jailbreak rate by bypassing the safety protocols on three different AI robotic systems in a few days.
  • Using RoboPAIR, researchers were able to manipulate test robots into performing harmful actions, like bomb detonation and blocking emergency exits, simply by changing how they phrased their commands.

Why does it matter?

This research highlights the importance of spotting weaknesses in AI systems to improve their safety, allowing us to test and train them to prevent potential harm.

From DSC:
Great! Just what we wanted to hear. But does it surprise anyone? Even so… we move forward at warp speed.


From DSC:
So, given the above item, does the next item make you a bit nervous as well? I saw someone on Twitter/X exclaim, “What could go wrong?”  I can’t say I didn’t feel the same way.

Introducing computer use, a new Claude 3.5 Sonnet, and Claude 3.5 Haiku — from anthropic.com

We’re also introducing a groundbreaking new capability in public beta: computer use. Available today on the API, developers can direct Claude to use computers the way people do—by looking at a screen, moving a cursor, clicking buttons, and typing text. Claude 3.5 Sonnet is the first frontier AI model to offer computer use in public beta. At this stage, it is still experimental—at times cumbersome and error-prone. We’re releasing computer use early for feedback from developers, and expect the capability to improve rapidly over time.
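Anthropic exposes computer use as a beta tool type on the Messages API. As a rough sketch of what a developer-side request might look like (the model name, tool type, and field names below follow Anthropic's public beta announcement, but treat the exact identifiers as assumptions that may change), the request body could be assembled like this:

```python
# Sketch of an Anthropic "computer use" request payload.
# The tool type "computer_20241022" and field names come from Anthropic's
# beta announcement; exact names may change as the beta evolves.

def build_computer_use_request(prompt: str) -> dict:
    """Assemble the body of a Messages API call with the computer-use tool."""
    return {
        "model": "claude-3-5-sonnet-20241022",
        "max_tokens": 1024,
        "tools": [
            {
                "type": "computer_20241022",   # beta computer-use tool
                "name": "computer",
                "display_width_px": 1024,      # virtual screen the model "sees"
                "display_height_px": 768,
            }
        ],
        "messages": [{"role": "user", "content": prompt}],
    }

request = build_computer_use_request("Open the settings menu and enable dark mode.")
print(request["tools"][0]["type"])  # computer_20241022
```

In practice this body would be sent through Anthropic's SDK with the computer-use beta flag enabled, and the application would execute the screenshot, click, and type actions the model returns in a loop.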

Per The Rundown AI:

The Rundown: Anthropic just introduced a new capability called ‘computer use’, alongside upgraded versions of its AI models, which enables Claude to interact with computers by viewing screens, typing, moving cursors, and executing commands.

Why it matters: While many hoped for Opus 3.5, Anthropic’s Sonnet and Haiku upgrades pack a serious punch. Plus, with the new computer use embedded right into its foundation models, Anthropic just sent a warning shot to tons of automation startups—even if the capabilities aren’t earth-shattering… yet.

Also related/see:

  • What is Anthropic’s AI Computer Use? — from ai-supremacy.com by Michael Spencer
    Task automation, AI at the intersection of coding and AI agents take on new frenzied importance heading into 2025 for the commercialization of Generative AI.
  • New Claude, Who Dis? — from theneurondaily.com
    Anthropic just dropped two new Claude models…oh, and Claude can now use your computer.
  • When you give a Claude a mouse — from oneusefulthing.org by Ethan Mollick
    Some quick impressions of an actual agent

Introducing Act-One — from runwayml.com
A new way to generate expressive character performances using simple video inputs.

Per Lore by Nathan Lands:

What makes Act-One special? It can capture the soul of an actor’s performance using nothing but a simple video recording. No fancy motion capture equipment, no complex face rigging, no army of animators required. Just point a camera at someone acting, and watch as their exact expressions, micro-movements, and emotional nuances get transferred to an AI-generated character.

Think about what this means for creators: you could shoot an entire movie with multiple characters using just one actor and a basic camera setup. The same performance can drive characters with completely different proportions and looks, while maintaining the authentic emotional delivery of the original performance. We’re witnessing the democratization of animation tools that used to require millions in budget and years of specialized training.

Also related/see:


Google to buy nuclear power for AI datacentres in ‘world first’ deal — from theguardian.com
Tech company orders six or seven small nuclear reactors from California’s Kairos Power

Google has signed a “world first” deal to buy energy from a fleet of mini nuclear reactors to generate the power needed for the rise in use of artificial intelligence.

The US tech corporation has ordered six or seven small modular reactors (SMRs) from California’s Kairos Power, with the first due to be completed by 2030 and the remainder by 2035.

Related:


ChatGPT Topped 3 Billion Visits in September — from similarweb.com

After the extreme peak and summer slump of 2023, ChatGPT has been setting new traffic highs since May

ChatGPT has been topping its web traffic records for months now, with September 2024 traffic up 112% year-over-year (YoY) to 3.1 billion visits, according to Similarweb estimates. That’s a change from last year, when traffic to the site went through a boom-and-bust cycle.


Crazy “AI Army” — from aisecret.us

Also from aisecret.us, see World’s First Nuclear Power Deal For AI Data Centers

Google has made a historic agreement to buy energy from a group of small modular reactors (SMRs) built by Kairos Power in California. This is the world’s first nuclear power deal specifically for AI data centers.


New updates to help creators build community, drive business, & express creativity on YouTube — from support.google.com

Hey creators!
Made on YouTube 2024 is here and we’ve announced a lot of updates that aim to give everyone the opportunity to build engaging communities, drive sustainable businesses, and express creativity on our platform.

Below is a roundup with key info – feel free to upvote the announcements that you’re most excited about and subscribe to this post to get updates on these features! We’re looking forward to another year of innovating with our global community; it’s a future full of opportunities, and it’s all Made on YouTube!


New autonomous agents scale your team like never before — from blogs.microsoft.com

Today, we’re announcing new agentic capabilities that will accelerate these gains and bring AI-first business process to every organization.

  • First, the ability to create autonomous agents with Copilot Studio will be in public preview next month.
  • Second, we’re introducing ten new autonomous agents in Dynamics 365 to build capacity for every sales, service, finance and supply chain team.

10 Daily AI Use Cases for Business Leaders — from flexos.work by Daan van Rossum
While AI is becoming more powerful by the day, business leaders still wonder why and where to apply it today. I take you through 10 critical use cases where AI should take over your work or partner with you.


Multi-Modal AI: Video Creation Simplified — from heatherbcooper.substack.com by Heather Cooper

Emerging Multi-Modal AI Video Creation Platforms
The rise of multi-modal AI platforms has revolutionized content creation, allowing users to research, write, and generate images in one app. Now, a new wave of platforms is extending these capabilities to video creation and editing.

Multi-modal video platforms combine various AI tools for tasks like writing, transcription, text-to-voice conversion, image-to-video generation, and lip-syncing. These platforms leverage open-source models like FLUX and LivePortrait, along with APIs from services such as ElevenLabs, Luma AI, and Gen-3.
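To make that orchestration concrete, here is a minimal sketch of how such a platform might chain its stages. Every function below is a hypothetical placeholder standing in for a model or API call — none of these are real ElevenLabs, Luma AI, or Gen-3 endpoints:

```python
# Hypothetical multi-modal video pipeline. Each stage is a stub standing
# in for an LLM, TTS, video-generation, or lip-sync service.

def write_script(topic: str) -> str:
    """Stage 1: draft a narration script (stand-in for an LLM call)."""
    return f"A short narration about {topic}."

def synthesize_voice(script: str) -> bytes:
    """Stage 2: convert text to speech (stand-in for a TTS API)."""
    return script.encode("utf-8")  # placeholder for audio bytes

def generate_video(script: str) -> str:
    """Stage 3: produce footage from the script (stand-in for image-to-video)."""
    return f"video_for::{script[:20]}"

def lip_sync(video: str, audio: bytes) -> str:
    """Stage 4: align mouth movement with the audio track."""
    return f"{video}::synced::{len(audio)}b"

def make_clip(topic: str) -> str:
    """Chain the stages the way a multi-modal platform orchestrates them."""
    script = write_script(topic)
    audio = synthesize_voice(script)
    video = generate_video(script)
    return lip_sync(video, audio)

print(make_clip("ocean tides"))
```

The point of the sketch is the shape of the pipeline: each stage consumes the previous stage's output, which is why these platforms can bundle writing, voice, video, and lip-sync behind a single interface.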


AI Medical Imagery Model Offers Fast, Cost-Efficient Expert Analysis — from developer.nvidia.com/

 

8 Legal Tech Trends Transforming Practice in 2024 — from lawyer-monthly.com

Thanks to rapid advances in technology, the legal landscape is changing fast. In 2024, legal tech integration is the lifeblood of any law firm or legal department that wishes to stay competitive.

Innovations ranging from AI-driven research tools to blockchain-enabled contracts are reshaping legal work today. Understanding and embracing these trends will be vital to surviving and thriving in law as the revolution gains momentum and the landscape of legal practice continues to shift.

Below are the eight expected trends in legal tech defining the future legal practice.


Building your legal practice’s AI future: Understanding the actual technologies — from thomsonreuters.com
The implementation of a successful AI strategy for a law firm depends not only on having the right people, but also understanding the tech and how to make it work for the firm

While we’re not delving deep here into how generative artificial intelligence (GenAI) and large language models (LLMs) work, we will talk generally about different categories of tech and emerging GenAI functionalities that are specific for legal.


Ex-Microsoft engineers raise $25M for legal tech startup that uses AI to help lawyers analyze data — from geekwire.com by Taylor Soper

Supio, a Seattle startup founded in 2021 by longtime friends and former Microsoft engineers, raised a $25 million Series A investment to supercharge its software platform designed to help lawyers quickly sort, search, and organize case-related data.

Supio focuses on cases related to personal injury and mass tort plaintiff law (when many plaintiffs file a claim). It specializes in organizing unstructured data and letting lawyers use a chatbot to pull relevant information.

“Most lawyers are data-rich and time-starved, but Supio automates time-sapping manual processes and empowers them to identify critical information to prove and expedite their cases,” Supio CEO and co-founder Jerry Zhou said in a statement.


ILTACON 2024: Large law firms are moving carefully but always forward with their GenAI strategy — from thomsonreuters.com by Zach Warren

NASHVILLE — As the world approaches the two-year mark since the original introduction of OpenAI’s ChatGPT, law firms already have made in-roads into establishing generative artificial intelligence (GenAI) as a part of their firms. Whether for document and correspondence drafting, summarization of meetings and contracts, legal research, or for back-office capabilities, firms have been playing around with a number of use cases to see where the technology may fit into the future.


Thomson Reuters acquires pre-revenue legal LLM developer Safe Sign Technologies – Here’s why — from legaltechnology.com by Caroline Hill

Thomson Reuters announced (on August 21) it has made the somewhat unusual acquisition of UK pre-revenue startup Safe Sign Technologies (SST), which is developing legal-specific large language models (LLMs) and as of just eight months ago was operating in stealth mode.

There isn’t an awful lot of public information available about the company but speaking to Legal IT Insider about the acquisition, Hron explained that SST is focused in part on deep learning research as it pertains to training large language models and specifically legal large language models. The company as yet has no customers and has been focusing exclusively on developing the technology and the models.


Supio brings generative AI to personal injury cases — from techcrunch.com by Kyle Wiggers

Legal work is incredibly labor- and time-intensive, requiring piecing together cases from vast amounts of evidence. That’s driving some firms to pilot AI to streamline certain steps; according to a 2023 survey by the American Bar Association, 35% of law firms now use AI tools in their practice.

OpenAI-backed Harvey is among the big winners so far in the burgeoning AI legal tech space, alongside startups such as Leya and Klarity. But there’s room for one more, says Jerry Zhou and Kyle Lam, the co-founders of an AI platform for personal injury law called Supio, which emerged from stealth Tuesday with a $25 million investment led by Sapphire Ventures.

Supio uses generative AI to automate bulk data collection and aggregation for legal teams. In addition to summarizing info, the platform can organize and identify files — and snippets within files — that might be useful in outlining, drafting and presenting a case, Zhou said.


 

ILTACON 2024: Selling legal tech’s monorail — from abajournal.com by Nicole Black

The bottom line: The promise of GenAI for our profession is great, but all signs point to the realization of its potential being six months out or more. So the question remains: Will generative AI change the legal landscape, ushering in an era of frictionless, seamless legal work? Or have we reached the pinnacle of its development, left only with empty promises? I think it’s the former since there is so much potential, and many companies are investing significantly in AI development, but only time will tell.


From LegalZoom to AI-Powered Platforms: The Rise of Smart Legal Services — from tmcnet.com by Artem Vialykh

In today’s digital age, almost every industry is undergoing a transformation driven by technological innovation, and the legal field is no exception. Traditional legal services, often characterized by high fees, time-consuming processes, and complex paperwork, are increasingly being challenged by more accessible, efficient, and cost-effective alternatives.

LegalZoom, one of the pioneers in offering online legal services, revolutionized the way individuals and small businesses accessed legal assistance. However, with the advent of artificial intelligence (AI) and smart technologies, we are witnessing the rise of even more sophisticated platforms that are poised to reshape the legal landscape further.

The Rise of AI-Powered Legal Platforms
AI-powered legal platforms represent the next frontier in legal services. These platforms leverage the power of artificial intelligence, machine learning, and natural language processing to provide legal services that are not only more efficient but also more accurate and tailored to the needs of the user.

AI-powered platforms offer many advantages, one of which is their ability to rapidly process and analyze large amounts of data. This capability allows them to provide users with precise legal advice and document generation in a fraction of the time it would take a human attorney. For example, AI-driven platforms can review and analyze contracts, identify potential legal risks, and even suggest revisions, all in real time. This level of automation significantly reduces the time and cost associated with traditional legal services.


AI, Market Dynamics, and the Future of Legal Services with Harbor’s Zena Applebaum — from geeklawblog.com by Greg Lambert

Zena talks about the integration of generative AI (Gen AI) into legal research tools, particularly at Thomson Reuters, where she previously worked. She emphasizes the challenges in managing expectations around AI’s capabilities while ensuring that the products deliver on their promises. The legal industry has high expectations for AI to simplify the time-consuming and complex nature of legal research. However, Applebaum highlights the need for balance, as legal research remains inherently challenging, and overpromising on AI’s potential could lead to dissatisfaction among users.

Zena shares her outlook on the future of the legal industry, particularly the growing sophistication of in-house legal departments and the increasing competition for legal talent. She predicts that as AI continues to enhance efficiency and drive changes in the industry, the demand for skilled legal professionals will rise. Law firms will need to adapt to these shifts by embracing new technologies and rethinking their strategies to remain competitive in a rapidly evolving market.


Future of the Delivery of Legal Services — from americanbar.org
The legal profession is in the midst of unprecedented change. Learn what might be next for the industry and your bar.


What. Just. Happened? (Post-ILTACon Emails Week of 08-19-2024) — from geeklawblog.com by Greg Lambert

Here’s this week’s edition of What. Just. Happened? Remember, you can track these daily with the AI Lawyer Talking Tech podcast (Spotify or Apple) which covers legal tech news and summarizes stories.


From DSC:
And although this next one is not necessarily legaltech-related, I wanted to include it here anyway — as I’m
always looking to reduce the costs of obtaining a degree.

Improve the Diversity of the Profession By Addressing the Costs of Becoming a Lawyer — from lssse.indiana.edu by Joan Howarth

Not surprisingly, then, research shows that economic assets are a significant factor in bar passage. And LSSSE research shows us the connections between the excessive expense of becoming a lawyer and the persistent racial and ethnic disparities in bar passage rate.

The racial and ethnic bar passage disparities are extreme. For example, the national ABA statistics for first-time passers in 2023-24 show White candidates passing at 83%, compared to Black candidates at 57%, with Asian and Hispanic candidates in the middle (75% and 69%, respectively).

These disturbing figures are closely related to the expense of becoming a lawyer.

Finally, though, after decades of stability — or stagnation — in attorney licensing, change is here. And some of the changes, such as the new pathway to licensure in Oregon based on supervised practice instead of a traditional bar exam, or the Nevada Plan in which most of the requirements can be satisfied during law school, should significantly decrease the costs of licensure and add flexibility for candidates with responsibilities beyond studying for a bar exam.  These reforms are long overdue.


Thomson Reuters acquires Safe Sign Technologies — from legaltechnology.com by Caroline Hill

Thomson Reuters today (21 August) announced it has acquired Safe Sign Technologies (SST), a UK-based startup that is developing legal-specific large language models (LLMs) and as of just eight months ago was operating in stealth mode.

 

Welcome to the Digital Writing Lab -- Supporting teachers to develop and empower digitally literate citizens.

Digital Writing Lab

About this Project

The Digital Writing Lab is a key component of the Australian national Teaching Digital Writing project, which runs from 2022 to 2025.

This stage of the broader project involves academic and secondary English teacher collaboration to explore how teachers are conceptualising the teaching of digital writing and what further supports they may need.

Previous stages of the project included archival research reviewing materials related to digital writing in Australia’s National Textbook Collection, and a national survey of secondary English teachers. You can find out more about the whole project via the project blog.

Who runs the project?

Project Lead Lucinda McKnight is an Associate Professor and Australian Research Council (ARC) DECRA Fellow researching how English teachers can connect the teaching of writing to contemporary media and students’ lifeworlds.

She is working with Leon Furze, who holds the doctoral scholarship attached to this project, and Chris Zomer, the project Research Fellow. The project is located in the Research for Educational Impact (REDI) centre at Deakin University, Melbourne.


Teaching Digital Writing is a research project about English today.

 

Learning Engineering: New Profession or Transformational Process? A Q&A with Ellen Wagner — from campustechnology.com by Mary Grush and Ellen Wagner

“Learning is one of the most personal things that people do; engineering provides problem-solving methods to enable learning at scale. How do we resolve this paradox?”

—Ellen Wagner

Wagner: Learning engineering offers us a process for figuring that out! If we think of learning engineering as a process that can transform research results into learning action, there will be evidence to guide that decision-making at each point in the value chain. I want to get people to think of learning engineering as a process for applying research in practice settings, rather than as a professional identity. And by that I mean that learning engineering is a bigger process than what any one person can do on their own.


From DSC:
Instructional Designers, Learning Experience Designers, Professors, Teachers, and Directors/Staff of Teaching & Learning Centers will be interested in this article. It made me think of the following graphic I created a while back:

We need to take more of the research from learning science and apply it in our learning spaces.

 

The Musician’s Rule and GenAI in Education — from opencontent.org by David Wiley

We have to provide instructors the support they need to leverage educational technologies like generative AI effectively in the service of learning. Given the amount of benefit that could accrue to students if powerful tools like generative AI were used effectively by instructors, it seems unethical not to provide instructors with professional development that helps them better understand how learning occurs and what effective teaching looks like. Without more training and support for instructors, the amount of student learning higher education will collectively “leave on the table” will only increase as generative AI gets more and more capable. And that’s a problem.

From DSC:
As is often the case, David put together a solid posting here. A few comments/reflections on it:

  • I agree that more training/professional development is needed, especially regarding generative AI. This would help achieve a far greater ROI and impact.
  • The pace of change makes it difficult to see where the sand is settling…and thus what to focus on
  • The Teaching & Learning Groups out there are also trying to learn and grow in their knowledge (so that they can train others)
  • The administrators out there are also trying to figure out what all of this generative AI stuff is all about; and so are the faculty members. It takes time for educational technologies’ impact to roll out and be integrated into how people teach.
  • As we’re talking about multiple disciplines here, I think we need more team-based content creation and delivery.
  • There needs to be more research on how best to use AI — again, it would be helpful if the sand settled a bit first, so as not to waste time and $$. But then that research needs to be piped into the classrooms far better.

We need to take more of the research from learning science and apply it in our learning spaces.

 

How Humans Do (and Don’t) Learn— from drphilippahardman.substack.com by Dr. Philippa Hardman
One of the biggest ever reviews of human behaviour change has been published, with some eye-opening implications for how we design & deliver learning experiences

Excerpts (emphasis DSC):

This month, researchers from the University of Pennsylvania published one of the biggest ever reviews of behaviour change efforts – i.e. interventions which do (and don’t) lead to behavioural change in humans.

Research into human behaviour change suggests that, in order to impact capability in real, measurable terms, we need to rethink how we typically design and deliver training.

The interventions we use most frequently to drive behaviour change, such as video-plus-quiz approaches and one-off workshops, have a negligible impact on measurable changes in human behaviour.

For learning professionals who want to change how their learners think and behave, this research shows conclusively the central importance of:

    1. Shifting attention away from the design of content to the design of context.
    2. Delivering sustained cycles of contextualised practice, support & feedback.

 

 

Introducing Perplexity Pages — from perplexity.ai
You’ve used Perplexity to search for answers, explore new topics, and expand your knowledge. Now, it’s time to share what you learned.

Meet Perplexity Pages, your new tool for easily transforming research into visually stunning, comprehensive content. Whether you’re crafting in-depth articles, detailed reports, or informative guides, Pages streamlines the process so you can focus on what matters most: sharing your knowledge with the world.

Seamless creation
Pages lets you effortlessly create, organize, and share information. Search any topic, and instantly receive a well-structured, beautifully formatted article. Publish your work to our growing library of user-generated content and share it directly with your audience with a single click.

A tool for everyone
Pages is designed to empower creators in any field to share knowledge.

  • Educators: Develop comprehensive study guides for your students, breaking down complex topics into easily digestible content.

  • Researchers: Create detailed reports on your findings, making your work more accessible to a wider audience.

  • Hobbyists: Share your passions by creating engaging guides that inspire others to explore new interests.

 

How to Make the Dream of Education Equity (or Most of It) a Reality — from nataliewexler.substack.com by Natalie Wexler
Studies on the effects of tutoring (by humans or computers) point to ways to improve regular classroom instruction.

One problem, of course, is that it’s prohibitively expensive to hire a tutor for every average or struggling student, or even one for every two or three of them. This was the two-sigma “problem” that Bloom alluded to in the title of his essay: how can the massive benefits of tutoring possibly be scaled up? Both Khan and Zuckerberg have argued that the answer is to have computers, maybe powered by artificial intelligence, serve as tutors instead of humans.

From DSC:
I’m hoping that AI-backed learning platforms WILL help many people of all ages and backgrounds. But I realize — and appreciate what Natalie is saying here as well — that human beings are needed in the learning process (especially at younger ages). 

But without the human element, that’s unlikely to be enough. Students are more likely to work hard to please a teacher than to please a computer.

Natalie goes on to talk about training all teachers in cognitive science — a solid idea for sure. That’s what I was trying to get at with this graphic:

We need to take more of the research from learning science and apply it in our learning spaces.

But I’m not as hopeful in all teachers getting trained in cognitive science…as it should have happened (in the Schools of Education and in the K12 learning ecosystem at large) by now. Perhaps it will happen, given enough time.

And with more homeschooling and blended programs of education occurring, that idea gets stretched even further. 

K-12 Hybrid Schooling Is in High Demand — from realcleareducation.com by Keri D. Ingraham (emphasis below from DSC); via GSV

Parents are looking for a different kind of education for their children. A 2024 poll of parents reveals that 72% are considering, 63% are searching for, and 44% have selected a new K-12 school option for their children over the past few years. So, what type of education are they seeking?

Additional polling data reveals that 49% of parents would prefer their child learn from home at least one day a week. While 10% want full-time homeschooling, the remaining 39% of parents desire their child to learn at home one to four days a week, with the remaining days attending school on-campus. Another parent poll released this month indicates that an astonishing 64% of parents indicated that if they were looking for a new school for their child, they would enroll him or her in a hybrid school.

 

GTC March 2024 Keynote with NVIDIA CEO Jensen Huang


Also relevant/see:




 


[Report] Generative AI Top 150: The World’s Most Used AI Tools (Feb 2024) — from flexos.work by Daan van Rossum
FlexOS.work surveyed Generative AI platforms to reveal which get used most. While ChatGPT reigns supreme, countless AI platforms are used by millions.

As the FlexOS research study “Generative AI at Work” concluded based on a survey amongst knowledge workers, ChatGPT reigns supreme.

2. AI Tool Usage is Way Higher Than People Expect – Beating Netflix, Pinterest, Twitch.
As measured by data analysis platform Similarweb based on global web traffic tracking, the AI tools in this list generate over 3 billion monthly visits.

With 1.67 billion visits, ChatGPT represents over half of this traffic and is already bigger than Netflix, Microsoft, Pinterest, Twitch, and The New York Times.



Artificial Intelligence Act: MEPs adopt landmark law — from europarl.europa.eu

  • Safeguards on general purpose artificial intelligence
  • Limits on the use of biometric identification systems by law enforcement
  • Bans on social scoring and AI used to manipulate or exploit user vulnerabilities
  • Right of consumers to launch complaints and receive meaningful explanations


The untargeted scraping of facial images from CCTV footage to create facial recognition databases will be banned.


A New Surge in Power Use Is Threatening U.S. Climate Goals — from nytimes.com by Brad Plumer and Nadja Popovich
A boom in data centers and factories is straining electric grids and propping up fossil fuels.

Something unusual is happening in America. Demand for electricity, which has stayed largely flat for two decades, has begun to surge.

Over the past year, electric utilities have nearly doubled their forecasts of how much additional power they’ll need by 2028 as they confront an unexpected explosion in the number of data centers, an abrupt resurgence in manufacturing driven by new federal laws, and millions of electric vehicles being plugged in.


OpenAI and the Fierce AI Industry Debate Over Open Source — from bloomberg.com by Rachel Metz

The tumult could seem like a distraction from the startup’s seemingly unending march toward AI advancement. But the tension, and the latest debate with Musk, illuminates a central question for OpenAI, along with the tech world at large as it’s increasingly consumed by artificial intelligence: Just how open should an AI company be?

The meaning of the word “open” in “OpenAI” seems to be a particular sticking point for both sides — something that you might think sounds, on the surface, pretty clear. But actual definitions are both complex and controversial.


Researchers develop AI-driven tool for near real-time cancer surveillance — from medicalxpress.com by Mark Alewine; via The Rundown AI
Artificial intelligence has delivered a major win for pathologists and researchers in the fight for improved cancer treatments and diagnoses.

In partnership with the National Cancer Institute, or NCI, researchers from the Department of Energy’s Oak Ridge National Laboratory and Louisiana State University developed a long-sequenced AI transformer capable of processing millions of pathology reports to provide experts researching cancer diagnoses and management with exponentially more accurate information on cancer reporting.


 

Immersive virtual reality tackles depression stigma says study — from inavateonthenet.net

A new study from the University of Tokyo has highlighted the positive effect that immersive virtual reality experiences have for depression anti-stigma and knowledge interventions compared to traditional video.

The study found that depression knowledge improved for both interventions; however, only the immersive VR intervention reduced stigma. In the VR-powered intervention, depression knowledge scores were positively associated with a neural response indicative of empathetic concern. The traditional video intervention saw the inverse, with participants demonstrating a brain response suggestive of distress.

From DSC:
This study makes me wonder why we haven’t heard of more VR-based uses in diversity training. I’m surprised we haven’t heard of situations where we are put in someone else’s moccasins, so to speak. We could have a lot more empathy for someone — and better understand their situation — if we were to experience life as others might experience it. In the process, we would likely uncover some hidden biases that we have.


Addendum on 3/12/24:

Augmented reality provides benefit for Parkinson’s physical therapy — from inavateonthenet.net


How AI Is Already Transforming the News Business — from politico.com by Jack Shafer
An expert explains the promise and peril of artificial intelligence.

The early vibrations of AI have already been shaking the newsroom. One downside of the new technology surfaced at CNET and Sports Illustrated, where editors let AI run amok with disastrous results. Elsewhere in news media, AI is already writing headlines, managing paywalls to increase subscriptions, performing transcriptions, turning stories into audio feeds, discovering emerging stories, fact-checking, copy editing and more.

Felix M. Simon, a doctoral candidate at Oxford, recently published a white paper about AI’s journalistic future that eclipses many early studies. Swinging a bat from a crouch that is neither doomer nor Utopian, Simon heralds both the downsides and promise of AI’s introduction into the newsroom and the publisher’s suite.

Unlike earlier technological revolutions, AI is poised to change the business at every level. It will become — if it already isn’t — the beginning of most story assignments and will become, for some, the new assignment editor. Used effectively, it promises to make news more accurate and timely. Used frivolously, it will spawn an ocean of spam. Wherever the production and distribution of news can be automated or made “smarter,” AI will surely step up. But the future has not yet been written, Simon counsels. AI in the newsroom will be only as bad or good as its developers and users make it.

Also see:

Artificial Intelligence in the News: How AI Retools, Rationalizes, and Reshapes Journalism and the Public Arena — from cjr.org by Felix Simon


EMO: Emote Portrait Alive – Generating Expressive Portrait Videos with Audio2Video Diffusion Model under Weak Conditions — from humanaigc.github.io by Linrui Tian, Qi Wang, Bang Zhang, and Liefeng Bo

We propose EMO, an expressive audio-driven portrait-video generation framework. Given a single reference image and vocal audio (e.g., talking or singing), our method can generate vocal avatar videos with expressive facial expressions and various head poses; moreover, we can generate videos of any duration, depending on the length of the input audio.


Adobe previews new cutting-edge generative AI tools for crafting and editing custom audio — from blog.adobe.com by the Adobe Research Team

New experimental work from Adobe Research is set to change how people create and edit custom audio and music. An early-stage generative AI music generation and editing tool, Project Music GenAI Control allows creators to generate music from text prompts, and then have fine-grained control to edit that audio for their precise needs.

“With Project Music GenAI Control, generative AI becomes your co-creator. It helps people craft music for their projects, whether they’re broadcasters, or podcasters, or anyone else who needs audio that’s just the right mood, tone, and length,” says Nicholas Bryan, Senior Research Scientist at Adobe Research and one of the creators of the technologies.


How AI copyright lawsuits could make the whole industry go extinct — from theverge.com by Nilay Patel
The New York Times’ lawsuit against OpenAI is part of a broader, industry-shaking copyright challenge that could define the future of AI.

There’s a lot going on in the world of generative AI, but maybe the biggest is the increasing number of copyright lawsuits being filed against AI companies like OpenAI and Stability AI. So for this episode, we brought on Verge features editor Sarah Jeong, who’s a former lawyer just like me, and we’re going to talk about those cases and the main defense the AI companies are relying on in those copyright cases: an idea called fair use.


FCC officially declares AI-voiced robocalls illegal — from techcrunch.com by Devin Coldewey

The FCC’s war on robocalls has gained a new weapon in its arsenal with the declaration of AI-generated voices as “artificial” and therefore definitely against the law when used in automated calling scams. It may not stop the flood of fake Joe Bidens that will almost certainly trouble our phones this election season, but it won’t hurt, either.

The new rule, contemplated for months and telegraphed last week, isn’t actually a new rule — the FCC can’t just invent them with no due process. Robocalls are just a new term for something largely already prohibited under the Telephone Consumer Protection Act: artificial and pre-recorded messages being sent out willy-nilly to every number in the phone book (something that still existed when they drafted the law).


EIEIO…Chips Ahoy! — from dashmedia.co by Michael Moe, Brent Peus, and Owen Ritz


Here Come the AI Worms — from wired.com by Matt Burgess
Security researchers created an AI worm in a test environment that can automatically spread between generative AI agents—potentially stealing data and sending spam emails along the way.

Now, in a demonstration of the risks of connected, autonomous AI ecosystems, a group of researchers have created one of what they claim are the first generative AI worms—which can spread from one system to another, potentially stealing data or deploying malware in the process. “It basically means that now you have the ability to conduct or to perform a new kind of cyberattack that hasn’t been seen before,” says Ben Nassi, a Cornell Tech researcher behind the research.


From DSC:
Given this need…

We need to take more of the research from learning science and apply it in our learning spaces.
…I’m highlighting the following resources:


How Learning Happens  — from edutopia.org
In this series, we explore how educators can guide all students, regardless of their developmental starting points, to become productive and engaged learners.

These techniques have resonated with educators everywhere: They are focused on taking advantage of the incredible opportunity to help children reach their full potential by creating positive relationships, experiences, and environments in which every student can thrive. In fact, the science is beginning to hint at even more dramatic outcomes. Practices explicitly designed to integrate social, emotional, and cognitive skills in the classroom, the research suggests, can reverse the damages wrought by childhood trauma and stress—while serving the needs of all students and moving them onto a positive developmental and academic path.


Also from edutopia.org recently, see:

How to Introduce Journaling to Young Children — from edutopia.org by Connie Morris
Students in preschool through second grade can benefit from drawing or writing to explore their thoughts and feelings.

The symbiotic relationship between reading and writing can help our youngest students grow their emergent literacy skills. The idea of teaching writing at an early age can seem daunting. However, meeting children where they are developmentally can make a journaling activity a magical experience — and they don’t have to write words but can convey thoughts in pictures.

7 Digital Tools That Help Bring History to Life — from edutopia.org by Daniel Leonard
Challenging games, fun projects, and a healthy dose of AI tools round out our top picks for breathing new life into history lessons.

We’ve compiled a list of seven teacher-tested tools, and we lay out how educators are using them both to enhance their lessons and to bring history closer to the present than ever.

Integrating Technology Into Collaborative Professional Learning — from edutopia.org by Roxi Thompson
Incorporating digital collaboration into PD gives teachers a model to replicate when setting up tech activities for students.


Google NotebookLM (experiment)

From DSC:
Google hopes that this personalized AI/app will help people with their note-taking, thinking, brainstorming, learning, and creating.

It reminds me of what Derek Bruff was recently saying regarding Top Hat’s Ace product: it can work with a much narrower set of information — i.e., a single course — and act almost like a personal learning assistant for the course you are taking. (As Derek mentions, this depends upon how extensively one uses the CMS/LMS in the first place.)

© 2024 | Daniel Christian