Be My Eyes AI offers GPT-4-powered support for blind Microsoft customers — from theverge.com by Sheena Vasani
The tech giant’s using Be My Eyes’ visual assistant tool to help blind users quickly resolve issues without a human agent.


From DSC:
Speaking of Microsoft and AI:

 

9 Tips for Using AI for Learning (and Fun!) — from edutopia.org by Daniel Leonard; via Donna Norton on X/Twitter
These innovative, AI-driven activities will help you engage students across grade levels and subject areas.

Here are nine AI-based lesson ideas to try across different grade levels and subject areas.

ELEMENTARY SCHOOL

[Image: A child’s drawing (left) and animations created with Animated Drawings. Courtesy of Meta AI Research.]

1. Bring Student Drawings to Life: Young kids love to sketch, and AI can animate their sketches—and introduce them to the power of the technology in the process.

HIGH SCHOOL

8. Speak With AI in a Foreign Language: When learning a new language, students might feel self-conscious about making mistakes and avoid practicing as much as they should.


Though not necessarily about education, also see:

How I Use AI for Productivity — from wondertools.substack.com by Jeremy Caplan
In this Wonder Tools audio post I share a dozen of my favorite AI tools

From DSC:
I like Jeremy’s mention of the various tools that he used in making this audio post:

 

The Beatles’ final song is now streaming thanks to AI — from theverge.com by Chris Welch
Machine learning helped Paul McCartney and Ringo Starr turn an old John Lennon demo into what’s likely the band’s last collaborative effort.


Scientists excited by AI tool that grades severity of rare cancer — from bbc.com by Fergus Walsh

Artificial intelligence is nearly twice as good at grading the aggressiveness of a rare form of cancer from scans as the current method, a study suggests.

By recognising details invisible to the naked eye, AI was 82% accurate, compared with 44% for lab analysis.

Researchers from the Royal Marsden Hospital and Institute of Cancer Research say it could improve treatment and benefit thousands every year.

They are also excited by its potential for spotting other cancers early.


Microsoft unveils ‘LeMa’: A revolutionary AI learning method mirroring human problem solving — from venturebeat.com by Michael Nuñez

Researchers from Microsoft Research Asia, Peking University, and Xi’an Jiaotong University have developed a new technique to improve large language models’ (LLMs) ability to solve math problems by having them learn from their mistakes, akin to how humans learn.

The researchers have revealed a pioneering strategy, Learning from Mistakes (LeMa), which trains AI to correct its own mistakes, leading to enhanced reasoning abilities, according to a research paper published this week.
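
For readers who like to see the mechanics, here is a loose sketch of the data-generation step the article describes (the prompt wording and helper names below are illustrative assumptions, not the authors' code): a model's wrong solutions are paired with explanations and corrections from a stronger model, and the resulting pairs become supervised fine-tuning data.

```python
# Loose sketch of the "learning from mistakes" idea described above:
# collect a model's wrong solutions, have a stronger "corrector" model
# explain and fix them, and turn the results into fine-tuning examples.
# Function names and prompt wording are illustrative, not from the paper.

def build_correction_example(problem: str, wrong_solution: str, corrector) -> dict:
    """Ask a stronger model to explain and correct a flawed solution."""
    prompt = (
        f"Problem:\n{problem}\n\n"
        f"Incorrect solution:\n{wrong_solution}\n\n"
        "Identify the mistake, explain why it is wrong, "
        "and provide a corrected step-by-step solution."
    )
    correction = corrector(prompt)  # e.g., a call to a larger model's API
    # Fine-tuning pair: the flawed attempt as input, the correction as target.
    return {"input": prompt, "target": correction}

# Usage sketch:
# data = [build_correction_example(p, s, corrector) for p, s in wrong_attempts]
# fine_tune(base_model, data)  # ordinary supervised fine-tuning on the pairs
```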

Also from Michael Nuñez at venturebeat.com, see:


GPTs for all, AzeemBot; conspiracy theorist AI; big tech vs. academia; reviving organs ++448 — from exponentialview.co by Azeem Azhar and Chantal Smith


Personalized A.I. Agents Are Here. Is the World Ready for Them? — from nytimes.com by Kevin Roose (behind a paywall)

You could think of the recent history of A.I. chatbots as having two distinct phases.

The first, which kicked off last year with the release of ChatGPT and continues to this day, consists mainly of chatbots capable of talking about things. Greek mythology, vegan recipes, Python scripts — you name the topic and ChatGPT and its ilk can generate some convincing (if occasionally generic or inaccurate) text about it.

That ability is impressive, and frequently useful, but it is really just a prelude to the second phase: artificial intelligence that can actually do things. Very soon, tech companies tell us, A.I. “agents” will be able to send emails and schedule meetings for us, book restaurant reservations and plane tickets, and handle complex tasks like “negotiate a raise with my boss” or “buy Christmas presents for all my family members.”


From DSC:
Very cool!


Nvidia Stock Jumps After Unveiling of Next Major AI Chip. It’s Bad News for Rivals. — from barrons.com

On Monday, Nvidia (ticker: NVDA) announced its new H200 Tensor Core GPU. The chip incorporates 141 gigabytes of memory and offers up to 60% to 90% performance improvements versus its current H100 model when used for inference, or generating answers from popular AI models.

From DSC:
The exponential curve seems to be continuing — a 60% to 90% performance improvement is a huge boost.

Also relevant/see:


The 5 Best GPTs for Work — from the AI Exchange

Custom GPTs are exploding, and we wanted to highlight our top 5 that we’ve seen so far:

 

Learning Corporate Learning — Newsletter #70 — from transcend.substack.com by Alberto Arenaza and Michael Narea
A deep-dive into the corporate learning-edtech market for startups

The Transcend Newsletter explores the intersection of the future of education and the future [of] work, and the founders building it around the world.

 

Lastly, we look at four product categories within L&D:

  • Content: libraries of learning content covering a wide range of topics (Coursera & Udemy for Business, Pluralsight, Skillsoft). Live classes are increasingly a part of this category, like Electives, Section or NewCampus.
  • Upskilling: programs focused on learning new skills (upskilling) or relocation of talent within the company (reskilling), both being more intensive than just content (Multiverse, Guild).
  • Coaching: support from coaches, mentors or even peers for employees’ learning (BetterUp, CoachHub, Torch).
  • Simulations: a new wave of scalable learning experiences that create practice scenarios for employees (Strivr, SimSkills).
 

From DSC:
The following item from The Washington Post made me ask, “Do we give students any/enough training on email etiquette? On effective ways to use LinkedIn, Twitter/X, messaging, and other tools?”


You’re emailing wrong at work. Follow this etiquette guide. — from washingtonpost.com by Danielle Abril
Get the most out of your work email and avoid being a jerk with these etiquette tips for the modern workplace

Most situations depend on the workplace culture. Still, there are some basic rules. Three email and business experts gave us tips for good email etiquette so you can avoid being the jerk at work.

  • Consider not sending an email
  • Keep it short and clear
  • Make it easy to read
  • Don’t blow up the inbox
  • …and more

From DSC:
I would add: use bolding, color, italics, etc. to highlight and help structure the email’s key points and sections.


 

What happens to teaching after Covid? — from chronicle.com by Beth McMurtrie

It’s an era many instructors would like to put behind them: black boxes on Zoom screens, muffled discussions behind masks, students struggling to stay engaged. But how much more challenging would teaching during the pandemic have been if colleges did not have experts on staff to help with the transition? On many campuses, teaching-center directors, instructional designers, educational technologists, and others worked alongside professors to explore learning-management systems, master video technology, and rethink what and how they teach.

A new book out this month, Higher Education Beyond Covid: New Teaching Paradigms and Promise, explores this period through the stories of campus teaching and learning centers. Their experiences reflect successes and failures, and what higher education could learn as it plans for the future.

Beth also mentioned/linked to:


How to hold difficult discussions online — from chronicle.com by Beckie Supiano

As usual, our readers were full of suggestions. Kathryn Schild, the lead instructional designer in faculty development and instructional support at the University of Alaska at Anchorage, shared a guide she’s compiled on holding asynchronous discussions, which includes a section on difficult topics.

In an email, Schild also pulled out a few ideas she thought were particularly relevant to Le’s question, including:

  • Set the ground rules as a class. One way to do this is to share your draft rules in a collaborative document and ask students to annotate it and add suggestions.
  • Plan to hold fewer difficult discussions than in a face-to-face class, and work on quality over quantity. This could include multiweek discussions, where you spiral through the same issue with fresh perspectives as the class learns new approaches.
  • Start with relationship-building interactions in the first few weeks, such as introductions, low-stakes group assignments, or peer feedback, etc.
 

New models and developer products announced at DevDay — from openai.com
GPT-4 Turbo with 128K context and lower prices, the new Assistants API, GPT-4 Turbo with Vision, DALL·E 3 API, and more.

Today, we shared dozens of new additions and improvements, and reduced pricing across many parts of our platform. These include:

  • New GPT-4 Turbo model that is more capable, cheaper and supports a 128K context window
  • New Assistants API that makes it easier for developers to build their own assistive AI apps that have goals and can call models and tools
  • New multimodal capabilities in the platform, including vision, image creation (DALL·E 3), and text-to-speech (TTS)
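
For developers who want to see what the announcements above look like in code, here is a minimal sketch using OpenAI's Python SDK. The model name and the beta Assistants namespace reflect what was announced at DevDay, but treat them as assumptions to verify against OpenAI's current documentation.

```python
# Minimal sketch of the DevDay additions via the OpenAI Python SDK (v1.x).
# Verify model names and the beta Assistants API against OpenAI's docs.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# 1. GPT-4 Turbo (128K context) through the Chat Completions API.
chat = client.chat.completions.create(
    model="gpt-4-1106-preview",
    messages=[{"role": "user", "content": "Summarize today's DevDay news in one sentence."}],
)
print(chat.choices[0].message.content)

# 2. A bare-bones Assistant with the code interpreter tool enabled.
assistant = client.beta.assistants.create(
    name="Math Tutor",
    instructions="Answer math questions and show your work.",
    tools=[{"type": "code_interpreter"}],
    model="gpt-4-1106-preview",
)
print(assistant.id)
```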


Introducing GPTs — from openai.com
You can now create custom versions of ChatGPT that combine instructions, extra knowledge, and any combination of skills.




OpenAI’s New Groundbreaking Update — from newsletter.thedailybite.co
Everything you need to know about OpenAI’s update, what people are building, and a prompt to skim long YouTube videos…

But among all this exciting news, the announcement of user-created “GPTs” took the cake.

That’s right, your very own personalized version of ChatGPT is coming, and it’s as groundbreaking as it sounds.

OpenAI’s groundbreaking announcement isn’t just a new feature – it’s a personal AI revolution. 

The upcoming customizable “GPTs” transform ChatGPT from a one-size-fits-all to a one-of-a-kind digital sidekick that is attuned to your life’s rhythm. 


Lore Issue #56: Biggest Week in AI This Year — from news.lore.com by Nathan Lands

First, Elon Musk announced “Grok,” a ChatGPT competitor inspired by “The Hitchhiker’s Guide to the Galaxy.” Surprisingly, in just a few months, xAI has managed to surpass the capabilities of GPT-3.5, signaling their impressive speed of execution and establishing them as a formidable long-term contender.

Then, OpenAI hosted their inaugural Dev Day, unveiling “GPT-4 Turbo,” which boasts a 128k context window, API costs slashed by threefold, text-to-speech capabilities, auto-model switching, agents, and even their version of an app store slated for launch next month.


The Day That Changed Everything — from joinsuperhuman.ai by Zain Kahn
ALSO: Everything you need to know about yesterday’s OpenAI announcements

  • OpenAI DevDay Part I: Custom ChatGPTs and the App Store of AI
  • OpenAI DevDay Part II: GPT-4 Turbo, Assistants, APIs, and more

OpenAI’s Big Reveal: Custom GPTs, GPT Store & More — from  news.theaiexchange.com
What you should know about the new announcements; how to get started with building custom GPTs


Incredible pace of OpenAI — from theaivalley.com by Barsee
PLUS: Elon’s Grok


 

 

Shocking AI Statistics in 2023 — from techthatmatters.beehiiv.com by Harsh Makadia

  1. ChatGPT reached 100 million users faster than any other app. By February 2023, the chat.openai.com website saw an average of 25 million daily visitors. How can this rise in AI usage benefit your business’s function?
  2. 45% of executives say the popularity of ChatGPT has led them to increase investment in AI. If executives are investing in AI personally, then how will their beliefs affect corporate investment in AI to drive automation further? Also, how will this affect the amount of workers hired to manage AI systems within companies?
  3. eMarketer predicts that in 2024 at least 20% of Americans will use ChatGPT monthly and that a fifth of them are 25-34 year olds in the workforce. Does this mean that there are more young workers using AI?
  4. …plus 10 more stats

People are speaking with ChatGPT for hours, bringing 2013’s Her closer to reality — from arstechnica.com by Benj Edwards
Long mobile conversations with the AI assistant using AirPods echo the sci-fi film.

It turns out that Willison’s experience is far from unique. Others have been spending hours talking to ChatGPT using its voice recognition and voice synthesis features, sometimes through car connections. The realistic nature of the voice interaction feels largely effortless, but it’s not flawless. Sometimes, it has trouble in noisy environments, and there can be a pause between statements. But the way the ChatGPT voices simulate vocal tics and noises feels very human. “I’ve been using the voice function since yesterday and noticed that it makes breathing sounds when it speaks,” said one Reddit user. “It takes a deep breath before starting a sentence. And today, actually a minute ago, it coughed between words while answering my questions.”

From DSC:
Hmmmmmmm….I’m not liking the sound of this on my initial take. But perhaps there are some real positives to it. I need to keep an open mind.


Working with AI: Two paths to prompting — from oneusefulthing.org by Ethan Mollick
Don’t overcomplicate things

  1. Conversational Prompting [From DSC: i.e., keep it simple]
  2. Structured Prompting

For most people, [Conversational Prompting] is good enough to get started, and it is the technique I use most of the time when working with AI. Don’t overcomplicate things, just interact with the system and see what happens. After you have some experience, however, you may decide that you want to create prompts you can share with others, prompts that incorporate your expertise. We call this approach Structured Prompting, and, while improving AIs may make it irrelevant soon, it is currently a useful tool for helping others by encoding your knowledge into a prompt that anyone can use.
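
As a rough illustration of the difference (the template below is my own example, not one of Mollick's), a conversational prompt might simply be "help me write a course announcement," while a structured prompt encodes the role, process, and constraints so that others can reuse your expertise:

```python
# An illustrative structured prompt template (my example, not Mollick's).
# The value of the structured approach is that the role, steps, and
# constraints are written down once and can be shared and reused.
STRUCTURED_PROMPT = """You are an experienced instructional designer.
Task: draft a course announcement for {course_name}.

Process:
1. Ask me up to three clarifying questions before drafting.
2. Draft the announcement in under 150 words, in a warm but professional tone.
3. End with one clear call to action for students.

Constraints:
- Do not invent dates, links, or policies; leave placeholders such as [DATE].
- Offer two subject-line options.
"""

print(STRUCTURED_PROMPT.format(course_name="Introduction to Data Visualization"))
```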


These fake images reveal how AI amplifies our worst stereotypes — from washingtonpost.com by Nitasha Tiku, Kevin Schaul, and Szu Yu Chen (behind paywall)
AI image generators like Stable Diffusion and DALL-E amplify bias in gender and race, despite efforts to detoxify the data fueling these results.

Artificial intelligence image tools have a tendency to spin up disturbing clichés: Asian women are hypersexual. Africans are primitive. Europeans are worldly. Leaders are men. Prisoners are Black.

These stereotypes don’t reflect the real world; they stem from the data that trains the technology. Grabbed from the internet, these troves can be toxic — rife with pornography, misogyny, violence and bigotry.

Abeba Birhane, senior advisor for AI accountability at the Mozilla Foundation, contends that the tools can be improved if companies work hard to improve the data — an outcome she considers unlikely. In the meantime, the impact of these stereotypes will fall most heavily on the same communities harmed during the social media era, she said, adding: “People at the margins of society are continually excluded.”


ChatGPT app revenue shows no signs of slowing, but some other AI apps top it — from techcrunch.com by Sarah Perez; Via AI Valley – Barsee

ChatGPT, the AI-powered chatbot from OpenAI, far outpaces all other AI chatbot apps on mobile devices in terms of downloads and is a market leader by revenue, as well. However, it’s surprisingly not the top AI app by revenue — several photo AI apps and even other AI chatbots are actually making more money than ChatGPT, despite the latter having become a household name for an AI chat experience.


ChatGPT can now analyze files you upload to it without a plugin — from bgr.com by Joshua Hawkins; via Superhuman

According to new reports, OpenAI has begun rolling out a more streamlined approach to how people use ChatGPT. The new system will allow the AI to choose a model automatically, letting you run Python code, open a web browser, or generate images with DALL-E without extra interaction. Additionally, ChatGPT will now let you upload and analyze files.

 


LEGALTECH TOOLS EVERYONE SHOULD KNOW ABOUT — from techdayhq.com

Enter legaltech: a field that marries the power of technology with the complexities of the law. From automating tedious tasks to enhancing research effectiveness, let’s delve into the world of legaltech and unmask the crucial tools everyone should know.



Should AI and Humans be Treated the Same Under the Law–Under a “Reasonable Robot” Standard? (Ryan Abbott – UCLA);  Technically Legal – A Legal Technology and Innovation Podcast

If a human uses artificial intelligence to invent something, should the invention be patentable?

If a driverless car injures a pedestrian, should the AI driver be held to a negligence standard as humans would? Or should courts apply the strict liability used for product defects?

What if AI steals money from a bank account? Should it be held to the same standard as a human under criminal law?

All interesting questions and the subject of a book called The Reasonable Robot by this episode’s guest, Ryan Abbott.


Colin Levy, Dorna Moini, and Ashley Carlisle on Herding Cats and Heralding Change: The Inside Scoop on the “Handbook of Legal Tech” — from geeklawblog.com by Greg Lambert & Marlene Gebauer

The guests offered advice to law students and lawyers looking to learn about and leverage legal tech. Carlisle emphasized starting with an open mind, intentional research, and reading widely from legal tech thought leaders. Moini recommended thinking big but starting small with iterative implementation. Levy stressed knowing your purpose and motivations to stay focused amidst the vast array of options.

Lambert prompted the guests to identify low-hanging fruit legal technologies those new to practice should focus on. Levy pointed to document automation and AI. Moini noted that intake and forms digitization can be a first step for laggards. Carlisle advised starting small with discrete tasks before tackling advanced tools.

For their forward-looking predictions, Carlisle saw AI hype fading but increasing tech literacy, Levy predicted growing focus on use and analysis of data as AI advances, and Moini forecasted a rise in online legal service delivery. The guests are excited about spreading awareness through the book to help transform the legal industry.


You’ll never be solo again — from jordanfurlong.substack.com by Jordan Furlong
Generative AI can be the partner, the assistant, the mentor, and the confidant that many sole practitioners and new lawyers never had. There’s just one small drawback…

In terms of legal support, a terrific illustration of Gen AI’s potential is provided by Deborah Merritt in a three-part blog series this month at Law School Cafe. Deborah explores the use of ChatGPT-4 as an aid to bar exam preparation and the first months of law practice, finding it to be astonishingly proficient at identifying legal issues, recommending tactical responses, and showing how to build relationships of trust with clients. It’s not perfect — it makes small errors and omissions that require an experienced lawyer’s review — but it’s still pretty mind-blowingly amazing that a free online technology can do any of this stuff at all. And as is always the case with Gen AI, it’s only going to get better.

In terms of administrative support, Mark Haddad of Thomson Reuters explains in Attorney At Work how AI-driven chatbots and CRM systems can handle a sole practitioner’s initial client queries, schedule appointments and send reminders, while AI can also analyze the firm’s practice areas and create marketing campaigns and content. Earlier this month, Clio itself announced plans for “Clio Duo,” a built-in proprietary Gen AI that “will serve as a coach, intuitive collaborator, and expert consultant to legal professionals, deeply attuned to the intricate facets of running a law firm.”



GPT-4 Beats the Bar Exam — from lawschoolcafe.org by Deborah J. Merritt

In the first three posts in this series, I used a bar exam question as an example of the type of problem a new lawyer might confront in practice. I then explored how GPT-4 might help a new lawyer address that practice problem. In this post, I’ll work with another sample question that NCBE has released for the NextGen bar exam. On this question, GPT-4 beats the bar exam. In other words, a new lawyer using GPT-4 would obtain better answers than one who remembered material studied for the bar exam.


ABA TECHSHOW 2024 – A Preview from the Co-Chairs — from legaltalknetwork.com by Cynthia Thomas, Sofia Lingos, Sharon D. Nelson, and Jim Calloway

Also see: The ABA TECHSHOW 2024


When It Comes to Legal Innovation Everything is Connected — from artificiallawyer.com by Richard Tromans

Legal tech can sometimes feel like it’s the whole world. We get absorbed by the details of the technology and are sometimes blinded by big investment announcements, but without the rest of the legal innovation ecosystem around it then this sector-specific software is limited. What do I mean? Let me explain.


The Most Significant Updates In The Case Management Sphere — from abovethelaw.com by Jared Correia
Joshua Lenon of Clio and Christopher Lafferty of Caret talk over case management software’s role in today’s law firm operations.

 

Nearly half of CEOs believe that AI not only could—but should—replace their own jobs — from finance.yahoo.com by Orianna Rosa Royle; via Harsh Makadia

Researchers from edX, an education platform for upskilling workers, conducted a survey involving over 1,500 executives and knowledge workers. The findings revealed that nearly half of CEOs believe AI could potentially replace “most” or even all aspects of their own positions.

What’s even more intriguing is that 47% of the surveyed executives not only see the possibility of AI taking over their roles but also view it as a desirable development.

Why? Because they anticipate that AI could rekindle the need for traditional leadership for those who remain.

“Success in the CEO role hinges on effective leadership, and AI can liberate time for this crucial aspect of their role,” Andy Morgan, Head of edX for Business, comments on the findings.

“CEOs understand that time saved on routine tasks can stimulate innovation, nurture creativity, and facilitate essential upskilling for their teams, fostering both individual and organizational success,” he adds.

But CEOs already know this: EdX’s research echoed that 79% of executives fear that if they don’t learn how to use AI, they’ll be unprepared for the future of work.

From DSC:
By the way, my first knee-jerk reaction to this was:

WHAT?!?!?!? And this from people who earn WAAAAY more than the average employee, no doubt.

After a chance to calm down a bit, I see that the article does say that CEOs aren’t going anywhere. Ah…ok…got it.


Strange Ways AI Disrupts Business Models, What’s Next For Creativity & Marketing, Some Provocative Data — from implications.com by Scott Belsky
In this edition, we explore some of the more peculiar ways that AI may change business models as well as recent releases for the world of creativity and marketing.

Time-based business models are liable for disruption via a value-based overhaul of compensation. Today, as most designers, lawyers, and many trades in between continue to charge by the hour, the AI-powered step-function improvements in workflows are liable to shake things up.

In such a world, time-based billing simply won’t work anymore unless the value derived from these services is also compressed by a multiple (unlikely). The classic time-based model of billing for lawyers, designers, consultants, freelancers etc is officially antiquated. So, how might the value be captured in a future where we no longer bill by the hour? …

The worlds of creativity and marketing are rapidly changing – and rapidly coming together

#AI #businessmodels #lawyers #billablehour

It becomes clear that just prompting to get images is a rather elementary use case of AI, compared to the ability to place and move objects, change perspective, adjust lighting, and many other actions using AI.



AlphaFold DB provides open access to over 200 million protein structure predictions to accelerate scientific research. — from

AlphaFold is an AI system developed by DeepMind that predicts a protein’s 3D structure from its amino acid sequence. It regularly achieves accuracy competitive with experiment.
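
As a small example of what "open access" can mean in practice: AlphaFold DB exposes a public REST endpoint keyed by UniProt accession. The URL pattern and response fields in the sketch below are assumptions to confirm against the AlphaFold DB documentation, and the `requests` library is assumed to be installed.

```python
# Sketch: look up an AlphaFold structure prediction by UniProt accession.
# The endpoint and response shape are assumptions to verify against the
# AlphaFold DB documentation before relying on them.
import requests

def fetch_alphafold_prediction(uniprot_id: str) -> dict:
    url = f"https://alphafold.ebi.ac.uk/api/prediction/{uniprot_id}"
    response = requests.get(url, timeout=30)
    response.raise_for_status()
    entries = response.json()          # expected: a list, one item per model
    return entries[0] if entries else {}

entry = fetch_alphafold_prediction("P69905")  # human hemoglobin subunit alpha
print(sorted(entry.keys()))                   # inspect the available fields,
                                              # e.g. links to PDB/mmCIF files
```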


After 25 years of growth for the $68 billion SEO industry, here’s how Google and other tech firms could render it extinct with AI — from fortune.com by Ravi Sen and The Conversation

But one other consequence is that I believe it may destroy the $68 billion search engine optimization industry that companies like Google helped create.

For the past 25 years or so, websites, news outlets, blogs and many others with a URL that wanted to get attention have used search engine optimization, or SEO, to “convince” search engines to share their content as high as possible in the results they provide to readers. This has helped drive traffic to their sites and has also spawned an industry of consultants and marketers who advise on how best to do that.

As an associate professor of information and operations management, I study the economics of e-commerce. I believe the growing use of generative AI will likely make all of that obsolete.


ChatGPT Plus members can upload and analyze files in the latest beta — from theverge.com by Wes Davis
ChatGPT Plus members can also use modes like Browse with Bing without manually switching, letting the chatbot decide when to use them.

OpenAI is rolling out new beta features for ChatGPT Plus members right now. Subscribers have reported that the update includes the ability to upload files and work with them, as well as multimodal support. Basically, users won’t have to select modes like Browse with Bing from the GPT-4 dropdown — it will instead guess what they want based on context.


Google agrees to invest up to $2 billion in OpenAI rival Anthropic — from reuters.com by Krystal Hu

Oct 27 (Reuters) – Alphabet’s (GOOGL.O) Google has agreed to invest up to $2 billion in the artificial intelligence company Anthropic, a spokesperson for the startup said on Friday.

The company has invested $500 million upfront into the OpenAI rival and agreed to add $1.5 billion more over time, the spokesperson said.

Google is already an investor in Anthropic, and the fresh investment would underscore a ramp-up in its efforts to better compete with Microsoft (MSFT.O), a major backer of ChatGPT creator OpenAI, as Big Tech companies race to infuse AI into their applications.


 

 


Teaching writing in the age of AI — from the Future of Learning (a Hechinger Report newsletter) by Javeria Salman

ChatGPT can produce a perfectly serviceable writing “product,” she said. But writing isn’t a product per se — it’s a tool for thinking, for organizing ideas, she said.

“ChatGPT and other text-based tools can’t think for us,” she said. “There’s still things to learn when it comes to writing because writing is a form of figuring out what you think.”

When students could contrast their own writing to ChatGPT’s more generic version, Levine said, they were able to “understand what their own voice is and what it does.”




Grammarly’s new generative AI feature learns your style — and applies it to any text — from techcrunch.com by Kyle Wiggers; via Tom Barrett

But what about text? Should — and if so, how should — writers be recognized and remunerated for AI-generated works that mimic their voices?

Those are questions that are likely to be raised by a feature in Grammarly, the cloud-based typing assistant, that’s scheduled to launch by the end of the year for subscribers to Grammarly’s business tier. Called “Personalized voice detection and application,” the feature automatically detects a person’s unique writing style and creates a “voice profile” that can rewrite any text in the person’s style.


Is AI Quietly Weaving the Fabric of a Global Classroom Renaissance? — from medium.com by Robert the Robot
In a world constantly buzzing with innovation, a silent revolution is unfolding within the sanctuaries of learning—our classrooms.

From bustling metropolises to serene hamlets, schools across the globe are greeting a new companion—Artificial Intelligence (AI). This companion promises to redefine the essence of education, making learning a journey tailored to each child’s unique abilities.

The advent of AI in education is akin to a gentle breeze, subtly transforming the academic landscape. Picture a classroom where each child, with their distinct capabilities and pace, embarks on a personalized learning path. AI morphs this vision into reality, crafting a personalized educational landscape that celebrates the unique potential harbored within every learner.


AI Books for Educators — from aiadvisoryboards.wordpress.com by Barbara Anna Zielonka

Books have always held a special place in my heart. As an avid reader and AI enthusiast, I have curated a list of books on artificial intelligence specifically tailored for educators. These books delve into the realms of AI, exploring its applications, ethical considerations, and its impact on education. Share your suggestions and let me know which books you would like to see included on this list.


SAIL: ELAI recordings, AI Safety, Near term AI/learning — by George Siemens

We held our fourth online Empowering Learners for the Age of AI conference last week. We sold out at 1500 people (a Whova and budget limit). The recordings/playlist from the conference can now be accessed here.

 

60+ Ideas for ChatGPT Assignments — from stars.library.ucf.edu by Kevin Yee, Kirby Whittington, Erin Doggette, and Laurie Uttich

60+ ideas for using ChatGPT in your assignments today


Artificial intelligence is disrupting higher education — from itweb.co.za by Rennie Naidoo; via GSV
Traditional contact universities need to adapt faster and find creative ways of exploring and exploiting AI, or lose their dominant position.

Higher education professionals have a responsibility to shape AI as a force for good.


Introducing Canva’s biggest education launch — from canva.com
We’re thrilled to unveil our biggest education product launch ever. Today, we’re introducing a whole new suite of products that turn Canva into the all-in-one classroom tool educators have been waiting for.

Also see Canva for Education.
Create and personalize lesson plans, infographics, posters, video, and more. 100% free for teachers and students at eligible schools.


ChatGPT and generative AI: 25 applications to support student engagement — from timeshighereducation.com by Seb Dianati and Suman Laudari
In the fourth part of their series looking at 100 ways to use ChatGPT in higher education, Seb Dianati and Suman Laudari share 25 prompts for the AI tool to boost student engagement


There are two ways to use ChatGPT — from theneurondaily.com

  1. Type to it.
  2. Talk to it (new).


Since then, we’ve looked to it for a variety of real-world business advice. For example, Prof Ethan Mollick posted a great guide to using ChatGPT-4 with voice as a negotiation instructor.

In a similar fashion, you can consult ChatGPT with voice for feedback on:

  • Job interviews.
  • Team meetings.
  • Business presentations.



Via The Rundown: Google is using AI to analyze the company’s Maps data and suggest adjustments to traffic light timing — aiming to cut driver waits, stops, and emissions.


Google Pixel’s face-altering photo tool sparks AI manipulation debate — from bbc.com by Darren Waters

The camera never lies. Except, of course, it does – and seemingly more often with each passing day.
In the age of the smartphone, digital edits on the fly to improve photos have become commonplace, from boosting colours to tweaking light levels.

Now, a new breed of smartphone tools powered by artificial intelligence (AI) are adding to the debate about what it means to photograph reality.

Google’s latest smartphones released last week, the Pixel 8 and Pixel 8 Pro, go a step further than devices from other companies. They are using AI to help alter people’s expressions in photographs.



From Digital Native to AI-Empowered: Learning in the Age of Artificial Intelligence — from campustechnology.com by Kim Round
The upcoming generation of learners will enter higher education empowered by AI. How can institutions best serve these learners and prepare them for the workplace of the future?

Dr. Chris Dede, of Harvard University and Co-PI of the National AI Institute for Adult Learning and Online Education, spoke about the differences between knowledge and wisdom in AI-human interactions in a keynote address at the 2022 Empowering Learners for the Age of AI conference. He drew a parallel between Star Trek: The Next Generation characters Data and Picard during complex problem-solving: While Data offers the knowledge and information, Captain Picard offers the wisdom and context from a leadership mantle, and determines its relevance, timing, and application.


The Near-term Impact of Generative AI on Education, in One Sentence — from opencontent.org by David Wiley

This “decreasing obstacles” framing turned out to be helpful in thinking about generative AI. When the time came, my answer to the panel question, “how would you summarize the impact generative AI is going to have on education?” was this:

“Generative AI greatly reduces the degree to which access to expertise is an obstacle to education.”

We haven’t even started to unpack the implications of this notion yet, but hopefully just naming it will give the conversation focus, give people something to disagree with, and help the conversation progress more quickly.


How to Make an AI-Generated Film — from heatherbcooper.substack.com by Heather Cooper
Plus, Midjourney finally has a new upscale tool!


Eureka! NVIDIA Research Breakthrough Puts New Spin on Robot Learning — from blogs.nvidia.com by Angie Lee
AI agent uses LLMs to automatically generate reward algorithms to train robots to accomplish complex tasks.
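
In rough outline (a conceptual sketch of the approach described in the post, not NVIDIA's Eureka code), the loop looks something like this: an LLM proposes candidate reward functions as code, each candidate is scored by training in simulation, and the scores are fed back to the LLM to request better candidates.

```python
# Conceptual sketch of LLM-in-the-loop reward design; the callables passed
# in (llm, train_and_evaluate) are placeholders, not NVIDIA's Eureka code.

def evolve_reward_function(llm, train_and_evaluate, task_description,
                           rounds=3, candidates_per_round=4):
    best_code, best_score, feedback = None, float("-inf"), "none yet"
    for _ in range(rounds):
        for _ in range(candidates_per_round):
            prompt = (
                f"Task: {task_description}\n"
                f"Results from previous attempts: {feedback}\n"
                "Write a Python reward function reward(state, action) for this task."
            )
            code = llm(prompt)                 # LLM proposes reward-function code
            score = train_and_evaluate(code)   # run RL in simulation, return a score
            if score > best_score:
                best_code, best_score = code, score
        feedback = f"best score so far: {best_score:.2f}"  # reflected back to the LLM
    return best_code, best_score
```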

From DSC:
I’m not excited about this, as I can’t help but wonder…how long before the militaries of the world introduce this into their warfare schemes and strategies?


The 93 Questions Schools Should Ask About AI — from edweek.org by Alyson Klein

The toolkit recommends schools consider:

  • Purpose: How can AI help achieve educational goals?
  • Compliance: How does AI fit with existing policies?
  • Knowledge: How can schools advance AI Literacy?
  • Balance: What are the benefits and risks of AI?
  • Integrity: How does AI fit into policies on things like cheating?
  • Agency: How can humans stay in the loop on AI?
  • Evaluation: How can schools regularly assess the impact of AI?
 
 

Adobe revealed 4 new AI heavy hitters — from aivalley.com

  1. Project Stardust
  2. Project Primrose
  3. Project Poseable
  4. Project Dub Dub Dub


Adobe is working on generative AI video manipulation — from theverge.com by Umar Shakir
The company revealed Project Fast Fill, a new way to remove people, add objects, and replace colors in videos using generative AI and text-prompt interactions.

Adobe is showing off a new generative fill feature, Project Fast Fill, that can easily add or remove objects in videos with the power of AI. It’s one of several new, wild, experimental AI features announced today at the company’s MAX conference. Project Fast Fill has the ability to swap in clothing accessories on people in motion or remove tourists from the background of a landscape pan.



 

The Game-Changer: How Legal Technology is Transforming the Legal Sector — from todaysconveyancer.co.uk by Perfect Portal

Rob Lawson, Strategic Sales Manager at Perfect Portal discussed why he thinks legal technology is so important:

“I spent almost 20 years in private practice and was often frustrated at the antiquated technology and processes that were deployed. It is one of the reasons that I love working in legal tech to provide solutions and streamline processes in the modern law firm. One of the major grumbles for practitioners is the amount of admin that they must do to fulfil the needs of their clients. Technology can automate routine tasks, streamline processes, and help manage large volumes of data more effectively. This then allows legal professionals to focus on more strategic aspects of their work. Ultimately this will increase efficiency and productivity.”



Some gen AI vendors say they’ll defend customers from IP lawsuits. Others, not so much. — from techcrunch.com by Kyle Wiggers

A person using generative AI — models that generate text, images, music and more given a prompt — could infringe on someone else’s copyright through no fault of their own. But who’s on the hook for the legal fees and damages if — or rather, when — that happens?

It depends.

In the fast-changing landscape of generative AI, companies monetizing the tech — from startups to big tech companies like Google, Amazon and Microsoft — are approaching IP risks from very different angles.


Clio Goes All Out with Major Product Announcements, Including A Personal Injury Add-On, E-Filing, and (Of Course) Generative AI — from lawnext.com by Bob Ambrogi

At its annual Clio Cloud Conference in Nashville today, the law practice management company Clio introduced an array of major new products and product updates, calling the series of announcements its most expansive product update ever in its 15-year history.


AI will invert the biglaw pyramid — from alexofftherecord.com by Cece Xie

These tasks that GPT can now handle are, coincidentally, common tasks for junior associates. From company and transaction summaries to legal research and drafting memos, analyzing and drafting have long been the purview of bright-eyed, bushy-tailed new law grads.

If we follow the capitalistic impulse of biglaw firms to its logical conclusion, this means that junior associates may soon face obsolescence. Why spend an hour figuring out how to explain an assignment to a first-year associate when you can just ask CoCounsel in five minutes? And the initial output will likely be better than a first-year’s initial work product, too.

Given the immense cost-savings that legal GPT products can confer, I suspect the rise of AI in legal tech will coincide with smaller junior associate classes. Gone are the days of 50+ junior lawyers all working on the same document review or due diligence. Instead, a fraction of those junior lawyers will be hired to oversee and QC the AI’s outputs. Junior associates will edit more than they do currently and manage more than they do right now. Juniors will effectively be more like midlevels from the get-go.


Beyond Law Firms: How Legal Tech’s Real Frontier Lies With SMBs (small and medium-sized businesses) — from forbes.com by Charles Brecque

Data and artificial intelligence are transforming the legal technology space—there’s no doubt about it. A recent Thomson Reuters Institute survey of lawyers showed that a large majority (82%) of respondents believe ChatGPT and generative AI can be readily applied to legal work.

While it’s tempting to think of legal tech as a playground exclusive to law firms, as technology enables employees without legal training to use and create legal frameworks and documentation, I’d like to challenge that narrative. Being the founder of a company that uses AI to manage contracts, the way I see it is the real magic happens when legal tech tools meet the day-to-day challenges of small and medium-sized businesses (known as “SMBs”).

 
© 2024 | Daniel Christian