For college students—and for higher ed itself—AI is a required course — from forbes.com by Jamie Merisotis

Some of the nation’s biggest tech companies have announced efforts to reskill people to avoid job losses caused by artificial intelligence, even as they work to perfect the technology that could eliminate millions of those jobs.

It’s fair to ask, however: What should college students and prospective students, weighing their choices and possible time and financial expenses, think of this?

The news this spring was encouraging for people seeking to reinvent their careers to grab middle-class jobs and a shot at economic security.

 


Addressing Special Education Needs With Custom AI Solutions — from teachthought.com
AI can offer many opportunities to create more inclusive and effective learning experiences for students with diverse learning profiles.

For too long, students with learning disabilities have struggled to navigate a traditional education system that often fails to meet their unique needs. But what if technology could help bridge the gap, offering personalized support and unlocking the full potential of every learner?

Artificial intelligence (AI) is emerging as a powerful ally in special education, offering many opportunities to create more inclusive and effective learning experiences for students with diverse learning profiles.



11 Summer AI Developments Important to Educators — from stefanbauschard.substack.com by Stefan Bauschard
Equity demands that we help students prepare to thrive in an AI-World

*SearchGPT
*Smaller & on-device (phones, glasses) AI models
*AI TAs
*Access barriers decline, equity barriers grow
*Claude Artifacts and Projects
*Agents, and Agent Teams of a million+
*Humanoid robots & self-driving cars
*AI Curricular integration
*Huge video and video-segmentation gains
*Writing Detectors — The final blow
*AI Unemployment, Student AI anxiety, and forward-thinking approaches
*Alternative assessments


Academic Fracking: When Publishers Sell Scholars’ Work to AI — from aiedusimplified.substack.com by Lance Eaton
Further discussion of publisher practices selling scholars’ work to AI companies

Last week, I explored AI and academic publishing in response to an article that came out a few weeks ago about a deal Taylor & Francis made to sell their books to Microsoft and one other AI company (unnamed) for a boatload of money.

Since then, two more pieces have been widely shared, including this piece from Inside Higher Ed by Kathryn Palmer (for which I was interviewed and in which I’m mentioned) and this piece from the Chronicle of Higher Ed by Christa Dutton. Both pieces try to cover the different sides: talking to authors, scanning the commentary online, finding experts to consult, and talking to the publishers. It’s one of those things that can feel really important and yet probably matters only to the small number of folks who find themselves thinking about academic publishing, scholarly communication, and generative AI.


At the Crossroads of Innovation: Embracing AI to Foster Deep Learning in the College Classroom — from er.educause.edu by Dan Sarofian-Butin
AI is here to stay. How can we, as educators, accept this change and use it to help our students learn?

The Way Forward
So now what?

In one respect, we already have a partial answer. Over the last thirty years, there has been a dramatic shift from a teaching-centered to a learning-centered education model. High-impact practices, such as service learning, undergraduate research, and living-learning communities, are common and embraced because they help students see the real-world connections of what they are learning and make learning personal.11

Therefore, I believe we must double down on a learning-centered model in the age of AI.

The first step is to fully and enthusiastically embrace AI.

The second step is to find the “jagged technological frontier” of using AI in the college classroom.




Futures Thinking in Education — from gettingsmart.com by Getting Smart Staff

Key Points

  • Educators should leverage these tools to prepare for rapid changes driven by technology, climate, and social dynamics.
  • Cultivating empathy for future generations can help educators design more impactful and forward-thinking educational practices.
 

The Three Wave Strategy of AI Implementation — from aiczar.blogspot.com by Alexander “Sasha” Sidorkin

The First Wave: Low-Hanging Fruit

These are just examples:

  • Student services
  • Resume and Cover Letter Review (Career Services): Offering individual resume critiques
  • Academic Policy Development and Enforcement (Academic Affairs)…
  • Health Education and Outreach (Health and Wellness Services) …
  • Sustainability Education and Outreach (Sustainability and Environmental Initiatives) …
  • Digital Marketing and Social Media Management (University Communications and Marketing) …
  • Grant Proposal Development and Submission (Research and Innovation) …
  • Financial Aid Counseling (Financial Aid and Scholarships) …
  • Alumni Communications (Alumni Relations and Development) …
  • Scholarly Communications (Library Services) …
  • International Student and Scholar Services (International Programs and Global Engagement)

Duolingo Max: A Paid Subscription to Learn a Language Using ChatGPT AI (Worth It?) — from theaigirl.substack.com by Diana Dovgopol (behind paywall for the most part)
The integration of AI in language learning apps could be game-changing.


Research Insights #12: Copyrights and Academia — from aiedusimplified.substack.com by Lance Eaton
Scholarly authors are not going to be happy…

A while back, I wrote about some of my thoughts on generative AI around the copyright issues. Not much has changed since then, but a new article (Academic authors ‘shocked’ after Taylor & Francis sells access to their research to Microsoft AI) is definitely stirring up all sorts of concerns among academic authors. The basics of that article are that Taylor & Francis sold access to authors’ research to Microsoft for AI development without informing the authors, sparking significant concern among academics and the Society of Authors about transparency, consent, and the implications for authors’ rights and future earnings.

The stir can be seen as both valid and redundant. Two folks’ points stick out to me in this regard.

 

AI Policy 101: a Beginners’ Framework — from drphilippahardman.substack.com by Dr. Philippa Hardman
How to make a case for AI experimentation & testing in learning & development


6 AI Tools Recommended By Teachers That Aren’t ChatGPT — from forbes.com by Dan Fitzpatrick

Here are six AI tools making waves in classrooms worldwide:

  • Brisk Teaching
  • SchoolAI
  • Diffit
  • Curipod
  • Skybox by Blockade Labs in ThingLink
  • Ideogram

With insights from educators who are leveraging their potential, let’s explore them in more detail.


AI Is Speeding Up L&D But Are We Losing the Learning? — from learningguild.com by Danielle Wallace

The role of learning & development
Given these risks, what can L&D professionals do to ensure generative AI contributes to effective learning? The solution lies in embracing the role of trusted learning advisors, guiding the use of AI tools in a way that prioritizes achieving learning outcomes over only speed. Here are three key steps to achieve this:

1. Playtest and Learn About AI
2. Set the Direction for AI to Be Learner-Centered…
3. Become Trusted Learning Advisors…


Some other tools to explore:

Descript: If you can edit text, you can edit videos. — per Bloomberg’s Vlad Savov
Descript is the AI-powered, fully featured, end-to-end video editor that you already know how to use.

A video editor that works like docs and slides
No need to learn a new tool — Descript works like the tools you’ve already learned.

Audeze | Filter — per Bloomberg’s Vlad Savov


AI Chatbots in Schools: Findings from a Poll of K-12 Teachers, Students, Parents, and College Undergraduates — from Impact Research; via Michael Spencer and Lily Lee

Key Findings

  • In the last year, AI has become even more intertwined with our education system. More teachers, parents, and students are aware of it and have used it themselves on a regular basis. It is all over our education system today.
  • While negative views of AI have crept up over the last year, students, teachers, and parents feel very positive about it in general. On balance they see positive uses for the technology in school, especially if they have used it themselves.
  • Most K-12 teachers, parents, and students don’t think their school is doing much about AI, despite its widespread use. Most say their school has no policy on it, is doing nothing to offer desired teacher training, and isn’t meeting the demand of students who’d like a career in a job that will need AI.
  • The AI vacuum in school policy means it is currently used “unauthorized,” while instead people want policies that encourage AI. Kids, parents, and teachers are figuring it out on their own/without express permission, whereas all stakeholders would rather have a policy that explicitly encourages AI from a thoughtful foundation.

The Value of AI in Today’s Classrooms — from waltonfamilyfoundation.org

There is much discourse about the rise and prevalence of AI in education and beyond. These debates often lack the perspectives of key stakeholders – parents, students and teachers.

In 2023, the Walton Family Foundation commissioned the first national survey of teacher and student attitudes toward ChatGPT. The findings showed that educators and students embrace innovation and are optimistic that AI can meaningfully support traditional instruction.

A new survey conducted May 7-15, 2024, showed that knowledge of and support for AI in education is growing among parents, students and teachers. More than 80% of each group says it has had a positive impact on education.

 

 

Thinking with Colleagues: AI in Education — from campustechnology.com by Mary Grush
A Q&A with Ellen Wagner

Wagner herself recently relied on the power of collegial conversations to probe the question: What’s on the minds of educators as they make ready for the growing influence of AI in higher education? CT asked her for some takeaways from the process.

We are in the very early days of seeing how AI is going to affect education. Some of us are going to need to stay focused on the basic research to test hypotheses. Others are going to dive into laboratory “sandboxes” to see if we can build some new applications and tools for ourselves. Still others will continue to scan newsletters like ProductHunt every day to see what kinds of things people are working on. It’s going to be hard to keep up, to filter out the noise on our own. That’s one reason why thinking with colleagues is so very important.

Mary and Ellen linked to “What Is Top of Mind for Higher Education Leaders about AI?” — from northcoasteduvisory.com. Below are some excerpts from those notes:

We are interested in how K-12 education will change in terms of foundational learning. With in-class, active learning designs, will younger students do a lot more intensive building of foundational writing and critical thinking skills before they get to college?

  1. The Human in the Loop: AI is built using math: think of applied statistics on steroids. Humans will be needed more than ever to manage, review and evaluate the validity and reliability of results. Curation will be essential.
  2. We will need to generate ideas about how to address AI factors such as privacy, equity, bias, copyright, intellectual property, accessibility, and scalability.
  3. Have other institutions experimented with AI detection and/or held off on emerging tools in this area? We have just recently adjusted guidance and paused some tools related to this, given the massive inaccuracies in detection (and related downstream issues in faculty-elevated conduct cases).

Even though we learn repeatedly that innovation has a lot to do with effective project management and a solid message that helps people understand what they can do to implement change, people really need innovation to be more exciting and visionary than that.  This is the place where we all need to help each other stay the course of change. 


Along these lines, also see:


What people ask me most. Also, some answers. — from oneusefulthing.org by Ethan Mollick
A FAQ of sorts

I have been talking to a lot of people about Generative AI, from teachers to business executives to artists to people actually building LLMs. In these conversations, a few key questions and themes keep coming up over and over again. Many of those questions are more informed by viral news articles about AI than about the real thing, so I thought I would try to answer a few of the most common, to the best of my ability.

I can’t blame people for asking because, for whatever reason, the companies actually building and releasing Large Language Models often seem allergic to providing any sort of documentation or tutorial besides technical notes. I was given much better documentation for the generic garden hose I bought on Amazon than for the immensely powerful AI tools being released by the world’s largest companies. So, it is no surprise that rumor has been the way that people learn about AI capabilities.

Currently, there are only really three AIs to consider: (1) OpenAI’s GPT-4 (which you can get access to with a Plus subscription or via Microsoft Bing in creative mode, for free), (2) Google’s Bard (free), or (3) Anthropic’s Claude 2 (free, but paid mode gets you faster access). As of today, GPT-4 is the clear leader, Claude 2 is second best (but can handle longer documents), and Google trails, but that will likely change very soon when Google updates its model, which is rumored to be happening in the near future.

 

Introductory comments from DSC:

Sometimes people and vendors write about AI’s capabilities in such a glowingly positive way. It seems like AI can do everything in the world. And while I appreciate the growing capabilities of Large Language Models (LLMs) and the like, there are some things I don’t want AI-driven apps to do.

For example, I get why AI can be helpful in correcting my misspellings, my grammatical errors, and the like. That said, I don’t want AI to write my emails for me. I want to write my own emails. I want to communicate what I want to communicate. I don’t want to outsource my communication. 

And what if an AI tool summarizes an email series in a way that I miss some key pieces of information? Hmmm…not good.

Ok, enough soapboxing. I’ll continue with some resources.


ChatGPT Enterprise

Introducing ChatGPT Enterprise — from openai.com
Get enterprise-grade security & privacy and the most powerful version of ChatGPT yet.

We’re launching ChatGPT Enterprise, which offers enterprise-grade security and privacy, unlimited higher-speed GPT-4 access, longer context windows for processing longer inputs, advanced data analysis capabilities, customization options, and much more. We believe AI can assist and elevate every aspect of our working lives and make teams more creative and productive. Today marks another step towards an AI assistant for work that helps with any task, is customized for your organization, and that protects your company data.

Enterprise-grade security & privacy and the most powerful version of ChatGPT yet. — from openai.com


NVIDIA

Nvidia’s Q2 earnings prove it’s the big winner in the generative AI boom — from techcrunch.com by Kirsten Korosec

Nvidia Quarterly Earnings Report Q2 Smashes Expectations At $13.5B — from techbusinessnews.com.au
Nvidia’s quarterly earnings report (Q2) smashed expectations, coming in at $13.5B, more than doubling the prior period’s $6.7B. The chipmaker also projected roughly $16B in total revenue for the October quarter.


MISC

OpenAI Passes $1 Billion Revenue Pace as Big Companies Boost AI Spending — from theinformation.com by Amir Efrati and Aaron Holmes

OpenAI is currently on pace to generate more than $1 billion in revenue over the next 12 months from the sale of artificial intelligence software and the computing capacity that powers it. That’s far ahead of revenue projections the company previously shared with its shareholders, according to a person with direct knowledge of the situation.

OpenAI’s GPTBot blocked by major websites and publishers — from the-decoder.com by Matthias Bastian
An emerging chatbot ecosystem builds on existing web content and could displace traditional websites. At the same time, licensing and financing are largely unresolved.

OpenAI offers publishers and website operators an opt-out if they prefer not to make their content available to chatbots and AI models for free. This can be done by blocking OpenAI’s web crawler “GPTBot” via the robots.txt file. The bot collects content to improve future AI models, according to OpenAI.
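In practice, the opt-out is a few lines of standard robots.txt directives. A minimal sketch (the "GPTBot" user-agent string is the one OpenAI names; the partial-blocking paths are illustrative assumptions, not from any particular site):

```
# robots.txt — block OpenAI's crawler site-wide
User-agent: GPTBot
Disallow: /

# Or allow only part of a site (illustrative paths):
# User-agent: GPTBot
# Allow: /public/
# Disallow: /articles/
```

Note that robots.txt is an honor-system convention: it signals a site's preference to compliant crawlers rather than technically enforcing the block.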

Major media companies including the New York Times, CNN, Reuters, Chicago Tribune, ABC, and Australian Community Media (ACM) are now blocking GPTBot. Other web-based content providers such as Amazon, Wikihow, and Quora are also blocking the OpenAI crawler.

Introducing Code Llama, a state-of-the-art large language model for coding  — from ai.meta.com

Takeaways re: Code Llama:

  • Is a state-of-the-art LLM capable of generating code, and natural language about code, from both code and natural language prompts.
  • Is free for research and commercial use.
  • Is built on top of Llama 2 and is available in three models…
  • In our own benchmark testing, Code Llama outperformed state-of-the-art publicly available LLMs on code tasks

Key Highlights of Google Cloud Next ‘23 — from analyticsindiamag.com by Shritama Saha
Meta’s Llama 2, Anthropic’s Claude 2, and TII’s Falcon join Model Garden, expanding model variety.

AI finally beats humans at a real-life sport— drone racing — from nature.com by Dan Fox
The new system combines simulation with onboard sensing and computation.

From DSC:
This is scary — not at all comforting to me. Militaries around the world continue their jockeying to be the most dominant, powerful, and effective killers of humankind. That definitely includes the United States and China. But certainly others as well. And below is another alarming item, also pointing out the downsides of how we use technologies.

The Next Wave of Scams Will Be Deepfake Video Calls From Your Boss — from bloomberg.com by Margi Murphy; behind paywall

Cybercriminals are constantly searching for new ways to trick people. One of the more recent additions to their arsenal was voice simulation software.

10 Great Colleges For Studying Artificial Intelligence — from forbes.com by Sim Tumay

The debut of ChatGPT in November created angst for college admission officers and professors worried they would be flooded by student essays written with the undisclosed assistance of artificial intelligence. But the explosion of interest in AI has benefits for higher education, including a new generation of students interested in studying and working in the field. In response, universities are revising their curriculums to educate AI engineers.

 


ElevenLabs’ AI Voice Generator Can Now Fake Your Voice in 30 Languages — from gizmodo.com by Kyle Barr
ElevenLabs said its AI voice generator is out of beta, saying it would support video game and audiobook creators with cheap audio.

According to ElevenLabs, the new Multilingual v2 model promises it can produce “emotionally rich” audio in a total of 30 languages. The company offers two AI voice tools: one is a text-to-speech model, and the other is the “VoiceLab,” which lets paying users clone a voice by inputting fragments of their (or others’) speech into the model to create a kind of voice clone. With the v2 model, users can get these generated voices to start speaking in Greek, Malay, or Turkish.

Since then, ElevenLabs claims it has integrated new measures to ensure users can only clone their own voice. Users need to verify their speech with a text captcha prompt, which is then compared to the original voice sample.

From DSC:
I don’t care what they say regarding safeguards/proof of identity/etc. This technology has been abused and will be abused in the future. We can count on it. The question now is, how do we deal with it?



Google, Amazon, Nvidia and other tech giants invest in AI startup Hugging Face, sending its valuation to $4.5 billion — from cnbc.com by Kif Leswing

But Hugging Face produces a platform where AI developers can share code, models, data sets, and use the company’s developer tools to get open-source artificial intelligence models running more easily. In particular, Hugging Face often hosts weights, or large files with lists of numbers, which are the heart of most modern AI models.

While Hugging Face has developed some models, like BLOOM, its primary product is its website platform, where users can upload models and their weights. It also develops a series of software tools called libraries that allow users to get models working quickly, to clean up large datasets, or to evaluate their performance. It also hosts some AI models in a web interface so end users can experiment with them.


The global semiconductor talent shortage — from www2.deloitte.com
How to solve semiconductor workforce challenges

Numerous skills are required to grow the semiconductor ecosystem over the next decade. Globally, we will need tens of thousands of skilled tradespeople to build new plants to increase and localize manufacturing capacity: electricians, pipefitters, welders; thousands more graduate electrical engineers to design chips and the tools that make the chips; more engineers of various kinds in the fabs themselves, but also operators and technicians. And if we grow the back end in Europe and the Americas, that equates to even more jobs.

Each of these job groups has distinct training and educational needs; however, the number of students in semiconductor-focused programs (for example, undergraduates in semiconductor design and fabrication) has dwindled. Skills are also evolving within these job groups, in part due to automation and increased digitization. Digital skills, such as cloud, AI, and analytics, are needed in design and manufacturing more than ever.

The chip industry has long partnered with universities and engineering schools. Going forward, they also need to work more with local tech schools, vocational schools, and community colleges; and other organizations, such as the National Science Foundation in the United States.


Our principles for partnering with the music industry on AI technology — from blog.youtube (Google) by Neal Mohan, CEO, YouTube
AI is here, and we will embrace it responsibly together with our music partners.

  • Principle #1: AI is here, and we will embrace it responsibly together with our music partners.
  • Principle #2: AI is ushering in a new age of creative expression, but it must include appropriate protections and unlock opportunities for music partners who decide to participate.
  • Principle #3: We’ve built an industry-leading trust and safety organization and content policies. We will scale those to meet the challenges of AI.

Developers are now using AI for text-to-music apps — from techcrunch.com by Ivan Mehta

Brett Bauman, the developer of PlayListAI (previously LinupSupply), launched a new app called Songburst on the App Store this week. The app doesn’t have a steep learning curve. You just have to type in a prompt like “Calming piano music to listen to while studying” or “Funky beats for a podcast intro” to let the app generate a music clip.

If you can’t think of a prompt, the app offers prompts in different categories, including video, lo-fi, podcast, gaming, meditation and sample.


A Generative AI Primer — from er.educause.edu by Brian Basgen
Understanding the current state of technology requires understanding its origins. This reading list provides sources relevant to the form of generative AI that led to natural language processing (NLP) models such as ChatGPT.


Three big questions about AI and the future of work and learning — from workshift.opencampusmedia.org by Alex Swartsel
AI is set to transform education and work today and well into the future. We need to start asking tough questions right now, writes Alex Swartsel of JFF.

  1. How will AI reshape jobs, and how can we prepare all workers and learners with the skills they’ll need?
  2. How can education and workforce leaders equitably adopt AI platforms to accelerate their impact?
  3. How might we catalyze sustainable policy, practice, and investments in solutions that drive economic opportunity?

“As AI reshapes both the economy and society, we must collectively call for better data, increased accountability, and more flexible support for workers,” Swartsel writes.


The Current State of AI for Educators (August, 2023) — from drphilippahardman.substack.com by Dr. Philippa Hardman
A podcast interview with the University of Toronto on where we’re at & where we’re going.

 

Teaching Assistants that Actually Assist Instructors with Teaching — from opencontent.org by David Wiley

“…what if generative AI could provide every instructor with a genuine teaching assistant – a teaching assistant that actually assisted instructors with their teaching?”

Assignment Makeovers in the AI Age: Reading Response Edition — from derekbruff.org by Derek Bruff

For my cryptography course, Mollick’s first option would probably mean throwing out all my existing reading questions. My intent with these reading questions was noble, that is, to guide students to the big questions and debates in the field, but those are exactly the kinds of questions for which AI can write decent answers. Maybe the AI tools would fare worse in a more advanced course with very specialized readings, but in my intro to cryptography course, they can handle my existing reading questions with ease.

What about option two? I think one version of this would be to do away with the reading response assignment altogether.

4 Steps to Help You Plan for ChatGPT in Your Classroom — from chronicle.com by Flower Darby
Why you should understand how to teach with AI tools — even if you have no plans to actually use them.


Some items re: AI in other areas:

15 Generative AI Tools A billion+ people will be collectively using very soon. I use most of them every day — from stefanbauschard.substack.com by Stefan Bauschard
ChatGPT, Bing, Office Suite, Google Docs, Claude, Perplexity.ai, Plug-Ins, MidJourney, Pi, Runway, Bard, Synthesia, D-ID

The Future of AI in Video: a look forward — from provideocoalition.com by Iain Anderson

Actors say Hollywood studios want their AI replicas — for free, forever — from theverge.com by Andrew Webster; resource from Tom Barrett

Along these lines of Hollywood and AI, see this Tweet:

Claude 2: ChatGPT rival launches chatbot that can summarise a novel — from theguardian.com by Dan Milmo; resource from Tom Barrett
Anthropic releases chatbot able to process large blocks of text and make judgments on what it is producing

Generative AI imagines new protein structures — from news.mit.edu by Rachel Gordon; resource from Sunday Signal
MIT researchers develop “FrameDiff,” a computational tool that uses generative AI to craft new protein structures, with the aim of accelerating drug development and improving gene therapy.

Google’s medical AI chatbot is already being tested in hospitals — from theverge.com by Wes Davis; resource via GSV

Ready to Sing Elvis Karaoke … as Elvis? The Weird Rise of AI Music — from rollingstone.com by Brian Hiatt; resource from Misha da Vinci
From voice-cloning wars to looming copyright disputes to a potential flood of nonhuman music on streaming, AI is already a musical battleground

 

Introducing Superalignment — from openai.com
We need scientific and technical breakthroughs to steer and control AI systems much smarter than us. To solve this problem within four years, we’re starting a new team, co-led by Ilya Sutskever and Jan Leike, and dedicating 20% of the compute we’ve secured to date to this effort. We’re looking for excellent ML researchers and engineers to join us.

Excerpts (emphasis DSC):

How do we ensure AI systems much smarter than humans follow human intent?

Currently, we don’t have a solution for steering or controlling a potentially superintelligent AI, and preventing it from going rogue. Our current techniques for aligning AI, such as reinforcement learning from human feedback, rely on humans’ ability to supervise AI. But humans won’t be able to reliably supervise AI systems much smarter than us, and so our current alignment techniques will not scale to superintelligence. We need new scientific and technical breakthroughs.

Our goal is to build a roughly human-level automated alignment researcher. We can then use vast amounts of compute to scale our efforts, and iteratively align superintelligence.

From DSC:
Hold up. We’ve been told for years that AI is at the toddler stage. But now assertions are being made that AI systems are smarter than humans — much smarter even. That said, then why is the goal of OpenAI to build a roughly human-level automated alignment researcher if humans aren’t that smart after all…? Which is it? I must be missing or misunderstanding something here…

OpenAI are jumping back on the alignment bandwagon with the brilliantly-named Superalignment Team. And you guessed it – they’re researching alignment of future superintelligent AIs. They reckon that AI can align other AI faster than humans can, and the plan is to build an AI that does just that. Head-spinning stuff…

Ben’s Bites

Plus…

Who else should be on this team? We certainly don’t want a team comprised of just technical people. How about including rabbis, pastors, priests, parents, teachers, professors, social workers, judges, legislators, and many others who can help represent other specialties, disciplines, and perspectives to protect society?


Authors file a lawsuit against OpenAI for unlawfully ‘ingesting’ their books — from theguardian.com by Ella Creamer; via Ben’s Bytes
Mona Awad and Paul Tremblay allege that their books, which are copyrighted, were ‘used to train’ ChatGPT because the chatbot generated ‘very accurate summaries’ of the works


How AI is Transforming Workplace Architecture and Design — from workdesign.com by Christian Lehmkuhl


London Futurists | Generative AI drug discovery breakthrough, with Alex Zhavoronkov — from londonfuturists.buzzsprout.com

Alex Zhavoronkov is our first guest to make a repeat appearance, having first joined us in episode 12, last November. We are delighted to welcome him back, because he is doing some of the most important work on the planet, and he has some important news.

In 2014, Alex founded Insilico Medicine, a drug discovery company which uses artificial intelligence to identify novel targets and novel molecules for pharmaceutical companies. Insilico now has drugs designed with AI in human clinical trials, and it is one of a number of companies that are demonstrating that developing drugs with AI can cut the time and money involved in the process by as much as 90%.


Watch This Space: New Field of Spatial Finance Uses AI to Estimate Risk, Monitor Assets, Analyze Claims — from blogs.nvidia.com

When making financial decisions, it’s important to look at the big picture — say, one taken from a drone, satellite or AI-powered sensor.

The emerging field of spatial finance harnesses AI insights from remote sensors and aerial imagery to help banks, insurers, investment firms and businesses analyze risks and opportunities, enable new services and products, measure the environmental impact of their holdings, and assess damage after a crisis.


Secretive hardware startup Humane’s first product is the Ai Pin — from techcrunch.com by Kyle Wiggers; via The Rundown AI

Excerpt:

Humane, the startup launched by ex-Apple design and engineering duo Imran Chaudhri and Bethany Bongiorno, today revealed details about its first product: The Humane Ai Pin.

Humane’s product, as it turns out, is a wearable gadget with a projected display and AI-powered features. Chaudhri gave a live demo of the device onstage during a TED Talk in April, but a press release issued today provides a few additional details.

The Humane Ai Pin is a new type of standalone device with a software platform that harnesses the power of AI to enable innovative personal computing experiences.


He Spent $140 Billion on AI With Little to Show. Now He Is Trying Again. — from wsj.com by Eliot Brown; via Superhuman
Billionaire Masayoshi Son said he would make SoftBank ‘the investment company for the AI revolution,’ but he missed out on the most recent frenzy


“Stunning”—Midjourney update wows AI artists with camera-like feature — from arstechnica.com by Benj Edwards; via Sam DeBrule from Machine Learnings
Midjourney v5.2 features camera-like zoom control over framing, more realism.


What is AIaaS? Guide to Artificial Intelligence as a Service — from eweek.com by Shelby Hiter
Artificial intelligence as a service, AIaaS, is an outsourced AI service provided by cloud-based AI providers.

AIaaS Definition
When a company is interested in working with artificial intelligence but doesn’t have the in-house resources, budget, or expertise to build and manage its own AI technology, it’s time to invest in AIaaS.

Artificial intelligence as a service, or AIaaS, is an outsourced AI service model that cloud-based companies provide to other businesses, giving them access to different AI models, algorithms, and other resources directly through a cloud computing platform; this access is usually managed through an API or SDK connection.
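As a rough illustration of that API-style access, here is a minimal sketch of a client wrapping a hypothetical AIaaS endpoint. The base URL, model name, and payload shape are all illustrative assumptions, not any specific vendor’s API:

```python
import json
import urllib.request

class AIaaSClient:
    """Minimal sketch of an AIaaS API wrapper (hypothetical endpoint)."""

    def __init__(self, api_key, base_url="https://api.example-aiaas.com/v1"):
        # The base URL and bearer-token auth scheme are placeholders.
        self.api_key = api_key
        self.base_url = base_url

    def build_request(self, model, prompt):
        # Typical AIaaS calls send a JSON payload naming the model and
        # the input, authenticated via an API key in the headers.
        payload = json.dumps({"model": model, "input": prompt}).encode("utf-8")
        return urllib.request.Request(
            f"{self.base_url}/generate",
            data=payload,
            headers={
                "Authorization": f"Bearer {self.api_key}",
                "Content-Type": "application/json",
            },
        )

client = AIaaSClient(api_key="demo-key")
req = client.build_request("text-model-1", "Summarize this paragraph.")
print(req.full_url)  # → https://api.example-aiaas.com/v1/generate
```

In practice, the provider’s SDK hides this plumbing entirely; the point is simply that the AI capability lives behind a network call rather than on the company’s own infrastructure.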


The Rise of the AI Engineer — from latent.space


Boost ChatGPT with new plugins — from wondertools.substack.com by Jeremy Caplan
Wonder Tools | Six new ways to use AI


A series re: AI from Jeff Foster at ProvideoCoalition.com


The AI upskilling imperative to build a future-ready workforce — from businessinsider.com

Excerpts:

Skill development has always been crucial, but recent technological advancements have raised the stakes. We are currently in the midst of the fourth industrial revolution, where automation and breakthroughs in artificial intelligence (AI) are revolutionising the workplace. In this era of quick change and short half-life of skills, upskilling shouldn’t be an afterthought. Instead, reskilling and upskilling have to evolve into requirements for effective professional development.

To understand the significance of upskilling for your career trajectory, it is important to recognise the ever-evolving nature of technology and the rapid pace of digital transformation. Business Insider India has been exploring how businesses and thought leaders are driving innovation by educating their staff on the technologies and skills that will shape the future.

 

Introducing Teach AI — Empowering educators to teach w/ AI & about AI [ISTE & many others]





 

EdTech Is Going Crazy For AI — from joshbersin.com by Josh Bersin

Excerpts:

This week I spent a few days at the ASU/GSV conference and ran into 7,000 educators, entrepreneurs, and corporate training people who had gone CRAZY for AI.

No, I’m not kidding. This community, which is made up of people like training managers, community college leaders, educators, and policymakers, is absolutely freaked out about ChatGPT, Large Language Models, and all sorts of issues with AI. Now don’t get me wrong: I’m a huge fan of this. But the frenzy is unprecedented: this is bigger than the excitement at the launch of the iPhone.

Second, the L&D market is about to get disrupted like never before. I had two interactive sessions with about 200 L&D leaders and I essentially heard the same thing over and over. What is going to happen to our jobs when these Generative AI tools start automatically building content, assessments, teaching guides, rubrics, videos, and simulations in seconds?

The answer is pretty clear: you’re going to get disrupted. I’m not saying that L&D teams need to worry about their careers, but it’s very clear to me they’re going to have to swim upstream in a big hurry. As with all new technologies, it’s time for learning leaders to get to know these tools, understand how they work, and start to experiment with them as fast as they can.


Speaking of the ASU+GSV Summit, see this posting from Michael Moe:

EIEIO…Brave New World
By: Michael Moe, CFA, Brent Peus, Owen Ritz

Excerpt:

Last week, the 14th annual ASU+GSV Summit hosted over 7,000 leaders from 70+ companies as well as over 900 of the world’s most innovative EdTech companies. Below are some of our favorite speeches from this year’s Summit…

***

Also see:

Imagining what’s possible in lifelong learning: Six insights from Stanford scholars at ASU+GSV — from acceleratelearning.stanford.edu by Isabel Sacks

Excerpt:

High-quality tutoring is one of the most effective educational interventions we have – but we need both humans and technology for it to work. In a standing-room-only session, GSE Professor Susanna Loeb, a faculty lead at the Stanford Accelerator for Learning, spoke alongside school district superintendents on the value of high-impact tutoring. The most important factors in effective tutoring, she said, are (1) the tutor has data on specific areas where the student needs support, (2) the tutor has high-quality materials and training, and (3) there is a positive, trusting relationship between the tutor and student. New technologies, including AI, can make the first and second elements much easier – but they will never be able to replace human adults in the relational piece, which is crucial to student engagement and motivation.



A guide to prompting AI (for what it is worth) — from oneusefulthing.org by Ethan Mollick
A little bit of magic, but mostly just practice

Excerpt (emphasis DSC):

Being “good at prompting” is a temporary state of affairs. The current AI systems are already very good at figuring out your intent, and they are getting better. Prompting is not going to be that important for that much longer. In fact, it already isn’t in GPT-4 and Bing. If you want to do something with AI, just ask it to help you do the thing. “I want to write a novel, what do you need to know to help me?” will get you surprisingly far.

The best way to use AI systems is not to craft the perfect prompt, but rather to use it interactively. Try asking for something. Then ask the AI to modify or adjust its output. Work with the AI, rather than trying to issue a single command that does everything you want. The more you experiment, the better off you are. Just use the AI a lot, and it will make a big difference – a lesson my class learned as they worked with the AI to create essays.

From DSC:
Agreed –> “Being ‘good at prompting’ is a temporary state of affairs.” The user interfaces that are appearing (and will continue to appear) should help greatly in this regard.
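Mollick’s “work with the AI, rather than trying to issue a single command” advice amounts, mechanically, to keeping a running conversation history and replaying it with each follow-up request. A minimal sketch of that loop (the `generate` callable here is a stand-in for whatever chat model or API you happen to use):

```python
def refine(generate, first_prompt, follow_ups):
    """Iteratively refine an AI output by replaying the conversation.

    `generate` stands in for any chat-model call that accepts a list of
    {"role": ..., "content": ...} messages and returns a text reply.
    """
    history = [{"role": "user", "content": first_prompt}]
    reply = generate(history)
    history.append({"role": "assistant", "content": reply})
    for follow_up in follow_ups:
        # Each adjustment is sent with the full history, so the model
        # modifies its previous output rather than starting over.
        history.append({"role": "user", "content": follow_up})
        reply = generate(history)
        history.append({"role": "assistant", "content": reply})
    return reply, history

# Toy stand-in model: just reports how many turns it has seen.
fake_model = lambda msgs: f"draft after {len(msgs)} message(s)"
final, history = refine(fake_model, "Draft an essay outline.",
                        ["Make it shorter.", "Add a conclusion."])
print(final)         # → draft after 5 message(s)
print(len(history))  # → 6
```

The design choice matches Mollick’s point: the value comes from the back-and-forth loop, not from perfecting the first prompt.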




Excerpt from Lore Issue #28: Drake, Grimes, and The Future of AI Music — from lore.com

Here’s a summary of what you need to know:

  • The rise of AI-generated music has ignited legal and ethical debates, with record labels invoking copyright law to remove AI-generated songs from platforms like YouTube.
  • Tech companies like Google face a conundrum: should they take down AI-generated content, and if so, on what grounds?
  • Some artists, like Grimes, are embracing the change, proposing new revenue-sharing models and utilizing blockchain-based smart contracts for royalties.
  • The future of AI-generated music presents both challenges and opportunities, with the potential to create new platforms and genres, democratize the industry, and redefine artist compensation.

The Need for AI PD — from techlearning.com by Erik Ofgang
Educators need training on how to effectively incorporate artificial intelligence into their teaching practice, says Lance Key, an award-winning educator.

“School never was fun for me,” he says, hoping that as an educator he could change that for his students. “I wanted to make learning fun.” This “learning should be fun” philosophy is at the heart of the approach he advises educators take when it comes to AI.


Coursera Adds ChatGPT-Powered Learning Tools — from campustechnology.com by Kate Lucariello

Excerpt:

At its 11th annual conference in 2023, educational company Coursera announced it is adding ChatGPT-powered interactive ed tech tools to its learning platform, including a generative AI coach for students and an AI course-building tool for teachers. It will also add machine learning-powered translation, expanded VR immersive learning experiences, and more.

Coursera Coach will give learners a ChatGPT virtual coach to answer questions, give feedback, summarize video lectures and other materials, give career advice, and prepare them for job interviews. This feature will be available in the coming months.

From DSC:
Yes…it will be very interesting to see how tools and platforms interact from this time forth. The term “integration” will take a massive step forward, at least in my mind.


 



Our World Shaken, Not Stirred: Synthetic entertainment, hybrid social experiences, syncing ourselves with apps, and more. — from implications.com by Scott Belsky
Things will get weird. And exciting.

Excerpts:

Recent advances in technology will shake (not stir) the pot of culture and our day-to-day experiences. Examples? A new era of synthetic entertainment will emerge, online social dynamics will become “hybrid experiences” where AI personas are equal players, and we will sync ourselves with applications as opposed to using applications.

A new era of synthetic entertainment will emerge as the world’s video archives – as well as actors’ bodies and voices – will be used to train models. Expect sequels made without actor participation, a new era of ai-outfitted creative economy participants, a deluge of imaginative media that would have been cost prohibitive, and copyright wars and legislation.

Unauthorized sequels, spin-offs, some amazing stuff, and a legal dumpster fire: Now let’s shift beyond Hollywood to the fast-growing long tail of prosumer-made entertainment. This is where entirely new genres of entertainment will emerge, including the unauthorized sequels and spinoffs that I expect we will start seeing.


Also relevant/see:

Digital storytelling with generative AI: notes on the appearance of #AICinema — from bryanalexander.org by Bryan Alexander

Excerpt:

This is how I viewed a fascinating article about the so-called #AICinema movement.  Benj Edwards describes this nascent current and interviews one of its practitioners, Julie Wieland.  It’s a great example of people creating small stories using tech – in this case, generative AI, specifically the image creator Midjourney.

Bryan links to:

Artists astound with AI-generated film stills from a parallel universe — from arstechnica.com by Benj Edwards
A Q&A with “synthographer” Julie Wieland on the #aicinema movement.

An AI-generated image from an #aicinema still series called Vinyl Vengeance by Julie Wieland, created using Midjourney.


From DSC:
How will text-to-video impact the Learning and Development world? Teaching and learning? Those people communicating within communities of practice? Those creating presentations and/or offering webinars?

Hmmm…should be interesting!


 


Also from Julie Sobowale, see:

  • Law’s AI revolution is here — from nationalmagazine.ca
    At least this much we know. Firms need to develop a strategy around language models.

Also re: legaltech, see:

  • Pioneers and Pathfinders: Richard Susskind — from seyfarth.com by J. Stephen Poor
    In our conversation, Richard discusses the ways we should all be thinking about legal innovation, the challenges of training lawyers for the future, and the qualifications of those likely to develop breakthrough technologies in law, as well as his own journey and how he became interested in AI as an undergraduate student.

Also re: legaltech, see:

There is an elephant in the room that is rarely discussed. Who owns the IP of AI-generated content?

 

What ChatGPT And Generative AI Mean For Your Business? — from forbes.com by Gil Press [behind a paywall]

Excerpt:

Challenges abound with deploying AI in general but when it comes to generative AI, businesses face a “labyrinth of problems,” according to Forrester: Generating coherent nonsense; recreating biases; vulnerability to new security challenges and attacks; trust, reliability, copyright and intellectual property issues. “Any fair discussion of the value of adopting generative AI,” says Forrester, “must acknowledge its considerable costs. Training and re-training models takes time and money, and the GPUs required to run these workloads remain expensive.”

As is always the case with the latest and greatest enterprise technologies, tools and techniques, the answer to “what’s to be done?” boils down to one word: Learn. Study what your peers have been doing in recent years with generic AI. A good starting point is the just-published All-in On AI: How Smart Companies Win Big with Artificial Intelligence.

Also relevant/see:

Generative AI is here, along with critical legal implications — from venturebeat.com by Nathaniel Bach, Eric Bergner, and Andrea Del-Carmen Gonzalez

Excerpt:

With that promise comes a number of legal implications. For example, what rights and permissions are implicated when a GAI user creates an expressive work based on inputs involving a celebrity’s name, a brand, artwork, and potentially obscene, defamatory or harassing material? What might the creator do with such a work, and how might such use impact the creator’s own legal rights and the rights of others?

This article considers questions like these and the existing legal frameworks relevant to GAI stakeholders.

 

Education is about to radically change: AI for the masses — from gettingsmart.com by Nate McClennen and Rachelle Dené Poth

Key Points:

  • AI already does and will continue to impact education – along with every other sector.
  • Innovative education leaders have an opportunity to build the foundation for the most personalized learning system we have ever seen.

Action

Education leaders need to consider these possible futures now. There is no doubt that K-12 and higher ed learners will be using these tools immediately. It is not a question of preventing “AI plagiarism” (if such a thing could exist), but a question of how to modify teaching to take advantage of these new tools.

From DSC:
They go on to list some solid ideas and experiments to try out — both for students and for teachers. Thanks Nate and Rachelle!




 

NextGen Justice Tech: What regulatory reform could mean for justice tech — from thomsonreuters.com by Kristen Sonday

Excerpts (emphasis DSC):

One year in, the Utah Supreme Court had approved 30 companies, including those that created initiatives to provide individuals help completing court forms and receiving legal advice via chatbot.

The ruling is monumental because it allows legal professionals to provide guidance on completing legal forms that might be applied to other areas of law, including through online tools that can reach exponentially more individuals.

“By ruling in favor of Upsolve, the Southern District of New York… established a new First Amendment right in America: the right for low-income families to receive free, vetted, and accountable legal advice from professionals who aren’t lawyers,” said Rohan Pavuluri, Upsolve’s Co-Founder and CEO.

UChicago Medicine partners with legal aid lawyers to offer legal help to victims of violence — from abajournal.com by Debra Cassens Weiss

Excerpt:

The University of Chicago Medicine is working with Legal Aid Chicago to embed lawyers at the system’s trauma center in Chicago’s Hyde Park neighborhood to help victims of violence.

Legal AI: A Lawyer’s New Best Friend? — from legaltechmonitor.com by Stephen Embry

Excerpt:

The real question AI poses for the legal profession, says Susskind, is to what extent machines can be used to reduce uncertainty posed by problems. The fundamental question, says Susskind, is thus what problems lawyers are currently trying to solve that machines can solve better and quicker. The lawyer’s job in the future will be to focus on what clients really want: outcomes. Machines can’t provide outcomes, only reduce the uncertainty surrounding the potential outcomes, according to Susskind.

What Does Copyright Say about Generative Models? Not much. — from oreilly.com by Mike Loukides

Excerpt:

Ultimately we need both solutions: fixing copyright law to accommodate works used to train AI systems, and developing AI systems that respect the rights of the people who made the works on which their models were trained. One can’t happen without the other.

‘Complicit bias’ and ‘lawfare’ among top new legal terms in 2022 — from abajournal.com by Debra Cassens Weiss

Excerpt:

“Complicit bias” tops a list of new legal terms and expressions in 2022 compiled by law professors and academics who are on a committee for Burton’s Legal Thesaurus.

Law360 has a story on the top new terms and their meanings. According to the story, “complicit bias” refers to “an institution or community’s complicity in sustaining discrimination and harassment.”

Law360 listed 10 top legal terms, including these:

Why 2023 Will Be The Year of AI + My First Music Video — from legallydisrupted.com by Zach Abramowitz
ChatGPT Did Not Write This Song

 
© 2024 | Daniel Christian