The rise of AI fake news is creating a ‘misinformation superspreader’ — from washingtonpost.com by Pranshu Verma
AI is making it easy for anyone to create propaganda outlets, producing content that can be hard to differentiate from real news

Artificial intelligence is automating the creation of fake news, spurring an explosion of web content mimicking factual articles that instead disseminates false information about elections, wars and natural disasters.

Since May, websites hosting AI-created false articles have increased by more than 1,000 percent, ballooning from 49 sites to more than 600, according to NewsGuard, an organization that tracks misinformation.

Historically, propaganda operations have relied on armies of low-paid workers or highly coordinated intelligence organizations to build sites that appear to be legitimate. But AI is making it easy for nearly anyone — whether they are part of a spy agency or just a teenager in their basement — to create these outlets, producing content that is at times hard to differentiate from real news.


AI, and everything else — from pitch.com by Benedict Evans


Chevy Chatbots Go Rogue
How a customer service chatbot made a splash on social media; write your holiday cards with AI

A Chevrolet dealership’s AI chatbot, designed to assist customers in their vehicle search, became a social media sensation for all the wrong reasons. One user even convinced the chatbot to agree to sell a 2024 Chevy Tahoe for just one dollar!

This story is exactly why AI implementation needs to be approached strategically. Learning to use AI also means learning to think through the guardrails and boundaries you build.

Here are our tips.


Rite Aid used facial recognition on shoppers, fueling harassment, FTC says — from washingtonpost.com by Drew Harwell
A landmark settlement over the pharmacy chain’s use of the surveillance technology could raise further doubts about facial recognition’s use in stores, airports and other venues

The pharmacy chain Rite Aid misused facial recognition technology in a way that subjected shoppers to unfair searches and humiliation, the Federal Trade Commission said Tuesday, part of a landmark settlement that could raise questions about the technology’s use in stores, airports and other venues nationwide.

But the chain’s “reckless” failure to adopt safeguards, coupled with the technology’s long history of inaccurate matches and racial biases, ultimately led store employees to falsely accuse shoppers of theft, leading to “embarrassment, harassment, and other harm” in front of their family members, co-workers and friends, the FTC said in a statement.


 

Prompt engineering — from platform.openai.com

This guide shares strategies and tactics for getting better results from large language models (sometimes referred to as GPT models) like GPT-4. The methods described here can sometimes be deployed in combination for greater effect. We encourage experimentation to find the methods that work best for you.

Some of the examples demonstrated here currently work only with our most capable model, gpt-4. In general, if you find that a model fails at a task and a more capable model is available, it’s often worth trying again with the more capable model.

You can also explore example prompts which showcase what our models are capable of…
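Two of the guide’s core tactics — giving clear instructions and providing worked examples — come together when assembling the message list for a chat request. Here is a minimal sketch; the helper name, delimiter choice, and example data are illustrative, not taken from the guide itself:

```python
def build_fewshot_messages(instruction, examples, query, delimiter="###"):
    """Assemble a chat-style message list combining a clear system
    instruction, delimiter-wrapped inputs, and few-shot examples."""
    messages = [{"role": "system", "content": instruction}]
    for user_text, ideal_answer in examples:
        # Each worked example appears as a prior user/assistant turn.
        messages.append({"role": "user",
                         "content": f"{delimiter}\n{user_text}\n{delimiter}"})
        messages.append({"role": "assistant", "content": ideal_answer})
    # The actual query goes last, wrapped in the same delimiters.
    messages.append({"role": "user",
                     "content": f"{delimiter}\n{query}\n{delimiter}"})
    return messages

messages = build_fewshot_messages(
    instruction="Classify the sentiment of the text between ### markers "
                "as positive or negative.",
    examples=[("I loved this.", "positive"),
              ("Total waste of money.", "negative")],
    query="The battery died after a day.",
)
```

The resulting `messages` list can then be passed to a chat-completions endpoint; the few-shot turns give the model a pattern to imitate, which is often more reliable than describing the desired output in prose alone.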


Preparedness — from openai.com

The study of frontier AI risks has fallen far short of what is possible and where we need to be. To address this gap and systematize our safety thinking, we are adopting the initial version of our Preparedness Framework. It describes OpenAI’s processes to track, evaluate, forecast, and protect against catastrophic risks posed by increasingly powerful models.


Every Major Tech Development From 2023 — from newsletter.thedailybite.co
The yearly tech round-up, Meta’s smart glasses upgrade, and more…

Here’s every major innovation from the last 365 days:

  • Microsoft: Launched additional OpenAI-powered features, including Copilot for Microsoft Dynamics 365 and Microsoft 365, enhancing business functionalities like text summarization, tone adjustment in emails, data insights, and automatic presentation creation.
  • Google: Introduced Duet, akin to Microsoft’s Copilot, integrating Gen AI across Google Workspace for writing assistance and custom visual creation. Also debuted Generative AI Studio, enabling developers to craft AI apps, and unveiled Gemini & Bard, a new AI technology with impressive features.
  • Salesforce: …
  • Adobe: …
  • Amazon Web Services (AWS): …
  • IBM:  …
  • Nvidia:  …
  • OpenAI:  …
  • Meta (Facebook):
  • Tencent:
  • Baidu:

News in chatbots — from theneurondaily.com by Noah Edelman & Pete Huang

Here’s what’s on the horizon:

  • Multimodal AI gets huge. Instead of just typing, more people will talk to AI, listen to it, create images, get visual feedback, create graphs, and more.
  • AI video gets really good. So far, AI videos have been cool-but-not-practical. They’re getting way better and we’re on the verge of seeing 100% AI-generated films, animations, and cartoons.
  • AI on our phones. Imagine Siri with the brains of ChatGPT-4 and the ambition of Alexa. TBD who pulls this off first!
  • GPT-5. ‘Nuff said.

20 Best AI Chatbots in 2024 — from eweek.com by Aminu Abdullahi
These leading AI chatbots use generative AI to offer a wide menu of functionality, from personalized customer service to improved information retrieval.

Top 20 Generative AI Chatbot Software: Comparison Chart
We compared the key features of the top generative AI chatbot software to help you determine the best option for your company…


What Google Gemini Teaches Us About Trust and The Future — from aiwithallie.beehiiv.com by Allie K. Miller
The AI demo may have been misleading, but it teaches us two huge lessons.

TL;DR (too long, didn’t read)

  1. We’re moving from ‘knowledge’ to ‘action’. 
    AI is moving into proactive interventions.
  2. We’re getting more efficient. 
    Assume 2024 brings lower AI OpEx.
  3. It’s multi-modal from here on out. 
    Assume 2024 is multi-modal.
  4. There’s no one model to rule them all.
    Assume 2024 has more multi-model orchestration & delegation.

Stay curious, stay informed,
Allie


Chatbot Power Rankings — from theneurondaily.com by Noah Edelman

Here are our power rankings of the best chatbots for (non-technical) work:

1: ChatGPT-4—Unquestionably the smartest, with the strongest writing, coding, and reasoning abilities.

T1: Gemini Ultra—In theory as powerful as GPT-4. We won’t know for sure until it’s released in 2024.

2: Claude 2—Top choice for managing lengthy PDFs (handles ~75,000 words), and rarely hallucinates. Can be somewhat stiff.

3: Perplexity—Ideal for real-time information. Upgrading to Pro grants access to both Claude-2 and GPT-4.

T4: Pi—The most “human-like” chatbot, though integrating with business data can be challenging.

T4: Bing Chat—Delivers GPT-4-esque responses, has internet access, and can generate images. Bad UX and doesn’t support PDFs.

T4: Bard—Now powered by Gemini Pro, offers internet access and answer verification. Tends to hallucinate more frequently.

and others…


Midjourney + ChatGPT = Amazing AI Art — from theaigirl.substack.com by Diana Dovgopol and the Pycoach
Turn ChatGPT into a powerful Midjourney prompt machine with basic and advanced formulas.


Make music with AI — from aitestkitchen.withgoogle.com re: Music FX


 

 


New Resource Catalogs and Makes Searchable Nearly 600 GPTs Related to Law, Tax and Regulatory Issues — from lawnext.com by Bob Ambrogi

But if you are looking for a law-related GPT, a new site can help. Raymond Blyd, the Amsterdam-based cofounder of Legalpioneer, a site that lists law-related companies, and CEO of Legalcomplex, a company that tracks investments and market data, has uncovered nearly 600 law-related GPTs and made them searchable on a new resource he calls Legalpioneer Copilot.

Blyd (who recently changed the spelling of his last name from Blijd) told me that the GPTs he has found cover a range of legal, regulatory and tax issues, and could be useful for academics, professionals and businesses.


With Launch of New AI Features, LawToolBox Is First Legal App Approved for Use with Copilot for Microsoft 365 — from lawnext.com by Bob Ambrogi


How It Works: AutoNDA, A Free Platform to Automate NDAs Under the Open Source oneNDA Standard — from lawnext.com by Bob Ambrogi

AutoNDA is designed for in-house legal and business teams that need to streamline and centralize the NDA process. It enables self-serve access for your business teams and gives in-house teams control over outbound NDAs. It also stores and organizes all completed NDAs.


On LawNext: The Law Students Working to End Racism in the Legal System — from lawnext.com by Bob Ambrogi

 

From DSC:
The recent drama over at OpenAI reminds me of how important a few individuals are in influencing the lives of millions of people.

The C-Suites (i.e., the Chief Executive Officers, Chief Financial Officers, Chief Operating Officers, and the like) of companies like OpenAI, Alphabet (Google), Meta (Facebook), Microsoft, Netflix, NVIDIA, Amazon, Apple, and a handful of others have enormous power. Why? Because of the power and reach of the technologies that they create, market, and provide.

We need to be praying for the hearts of those in the C-Suites of these powerful vendors — as well as for their Boards.

LORD, grant them wisdom and help mold their hearts and perspectives so that they truly care about others. May their decisions not be based on making money alone…or doing something just because they can.

What happens in their hearts and minds DOES and WILL continue to impact the rest of us. And we’re talking about real ramifications here. This isn’t pie-in-the-sky thinking or ideas. This is for real. With real consequences. If you doubt that, go ask the families of those whose sons and daughters took their own lives due to what happened out on social media platforms. Disclosure: I use LinkedIn and Twitter quite a bit. I’m not bashing these platforms per se. But my point is that there are real impacts due to a variety of technologies. What goes on in the hearts and minds of the leaders of these tech companies matters.


Some relevant items:

Navigating Attention-Driving Algorithms, Capturing the Premium of Proximity for Virtual Teams, & New AI Devices — from implications.com by Scott Belsky

Excerpts (emphasis DSC):

No doubt, technology influences us in many ways we don’t fully understand. But one area where valid concerns run rampant is the attention-seeking algorithms powering the news and media we consume on modern platforms that efficiently polarize people. Perhaps we’ll call it The Law of Anger Expansion: When people are angry in the age of algorithms, they become MORE angry and LESS discriminate about who and what they are angry at.

Algorithms that optimize for grabbing attention, thanks to AI, ultimately drive polarization.

The AI learns quickly that a rational or “both sides” view is less likely to sustain your attention (so you won’t get many of those, which drives the sensation that more of the world agrees with you). But the rage-inducing stuff keeps us swiping.

Our feeds are being sourced in ways that dramatically change the content we’re exposed to.

And then these algorithms expand on these ultimately destructive emotions – “If you’re afraid of this, maybe you should also be afraid of this” or “If you hate those people, maybe you should also hate these people.”

How do we know when we’ve been polarized? This is the most important question of the day.

Whatever is inflaming you is likely an algorithm-driven expansion of anger and an imbalance of context.


 

 

Be My Eyes AI offers GPT-4-powered support for blind Microsoft customers — from theverge.com by Sheena Vasani
The tech giant’s using Be My Eyes’ visual assistant tool to help blind users quickly resolve issues without a human agent.


From DSC:
Speaking of Microsoft and AI:

 

The Beatles’ final song is now streaming thanks to AI — from theverge.com by Chris Welch
Machine learning helped Paul McCartney and Ringo Starr turn an old John Lennon demo into what’s likely the band’s last collaborative effort.


Scientists excited by AI tool that grades severity of rare cancer — from bbc.com by Fergus Walsh

Artificial intelligence is nearly twice as good at grading the aggressiveness of a rare form of cancer from scans as the current method, a study suggests.

By recognising details invisible to the naked eye, AI was 82% accurate, compared with 44% for lab analysis.

Researchers from the Royal Marsden Hospital and Institute of Cancer Research say it could improve treatment and benefit thousands every year.

They are also excited by its potential for spotting other cancers early.


Microsoft unveils ‘LeMa’: A revolutionary AI learning method mirroring human problem solving — from venturebeat.com by Michael Nuñez

Researchers from Microsoft Research Asia, Peking University, and Xi’an Jiaotong University have developed a new technique to improve large language models’ (LLMs) ability to solve math problems by having them learn from their mistakes, akin to how humans learn.

The researchers have revealed a pioneering strategy, Learning from Mistakes (LeMa), which trains AI to correct its own mistakes, leading to enhanced reasoning abilities, according to a research paper published this week.

Also from Michael Nuñez at venturebeat.com, see:


GPTs for all, AzeemBot; conspiracy theorist AI; big tech vs. academia; reviving organs ++448 — from exponentialview.co by Azeem Azhar and Chantal Smith


Personalized A.I. Agents Are Here. Is the World Ready for Them? — from nytimes.com by Kevin Roose (behind a paywall)

You could think of the recent history of A.I. chatbots as having two distinct phases.

The first, which kicked off last year with the release of ChatGPT and continues to this day, consists mainly of chatbots capable of talking about things. Greek mythology, vegan recipes, Python scripts — you name the topic and ChatGPT and its ilk can generate some convincing (if occasionally generic or inaccurate) text about it.

That ability is impressive, and frequently useful, but it is really just a prelude to the second phase: artificial intelligence that can actually do things. Very soon, tech companies tell us, A.I. “agents” will be able to send emails and schedule meetings for us, book restaurant reservations and plane tickets, and handle complex tasks like “negotiate a raise with my boss” or “buy Christmas presents for all my family members.”


From DSC:
Very cool!


Nvidia Stock Jumps After Unveiling of Next Major AI Chip. It’s Bad News for Rivals. — from barrons.com

On Monday, Nvidia (ticker: NVDA) announced its new H200 Tensor Core GPU. The chip incorporates 141 gigabytes of memory and offers 60% to 90% performance improvements versus its current H100 model when used for inference, or generating answers from popular AI models.

From DSC:
The exponential curve seems to be continuing — a 60% to 90% improvement is a huge boost in performance.

Also relevant/see:


The 5 Best GPTs for Work — from the AI Exchange

Custom GPTs are exploding, and we wanted to highlight our top 5 that we’ve seen so far:

 

A future-facing minister, a young inventor and a shared vision: An AI tutor for every student — from news.microsoft.com by Chris Welsch

The Ministry of Education and Pativada see what has become known as the U.A.E. AI Tutor as a way to provide students with 24/7 assistance as well as help level the playing field for those families who cannot afford a private tutor. At the same time, the AI Tutor would be an aid to teachers, they say. “We see it as a tool that will support our teachers,” says Aljughaiman. “This is a supplement to classroom learning.”

If everything goes according to plan, every student in the United Arab Emirates’ school system will have a personal AI tutor – that fits in their pockets.

It’s a story that involves an element of coincidence, a forward-looking education minister and a tech team led by a chief executive officer who still lives at home with his parents.

In February 2023, the U.A.E.’s education minister, His Excellency Dr. Ahmad Belhoul Al Falasi, announced that the ministry was embracing AI technology and pursuing the idea of an AI tutor to help Emirati students succeed. And he also announced that the speech he presented had been written by ChatGPT. “We should not demonize AI,” he said at the time.



Fostering deep learning in humans and amplifying our intelligence in an AI World — from stefanbauschard.substack.com by Stefan Bauschard
A free 288-page report on advancements in AI and related technology, their effects on education, and our practical support for AI-amplified human deep learning

Six weeks ago, Dr. Sabba Quidwai and I accidentally stumbled upon an idea to compare the deep learning revolution in computer science to the mostly lacking deep learning efforts in education (Mehta & Fine). I started writing, and as these things often go with me, I thought there were many other things that would be useful to think through and for educators to know, and we ended up with this 288-page report.

***

Here’s an abstract from that report:

This report looks at the growing gap between the attention paid to the development of intelligence in machines and humans. While computer scientists have made great strides in developing human intelligence capacities in machines using deep learning technologies, including the abilities of machines to learn on their own, a significant part of the education system has not kept up with developing the intelligence capabilities in people that will enable them to succeed in the 21st century. Instead of fully embracing pedagogical methods that place primary emphasis on promoting collaboration, critical thinking, communication, creativity, and self-learning through experiential, interdisciplinary approaches grounded in human deep learning and combined with current technologies, a substantial portion of the educational system continues to heavily rely on traditional instructional methods and goals. These methods and goals prioritize knowledge acquisition and organization, areas in which machines already perform substantially better than people.

Also from Stefan Bauschard, see:

  • Debating in the World of AI
    Performative assessment, learning to collaborate with humans and machines, and developing special human qualities

13 Nuggets of AI Wisdom for Higher Education Leaders — from jeppestricker.substack.com by Jeppe Klitgaard Stricker
Actionable AI Guidance for Higher Education Leaders

Incentivize faculty AI innovation with AI. 

Invest in people first, then technology. 

On teaching, learning, and assessment. AI has captured the attention of all institutional stakeholders. Capitalize to reimagine pedagogy and evaluation. Rethink lectures, examinations, and assignments to align with workforce needs. Consider incorporating Problem-Based Learning, building portfolios and proof of work, and conducting oral exams. And use AI to provide individualized support and assess real-world skills.

Actively engage students.


Some thoughts from George Siemens re: AI:

Sensemaking, AI, and Learning (SAIL), a regular look at how AI is impacting learning.

Our education system has a uni-dimensional focus: learning things. Of course, we say we care about developing the whole learner, but the metrics that matter (grades, transcripts) and that underpin the education system are largely focused on teaching students things that have long been Google-able but are now increasingly doable by AI. Developments in AI matter in ways that call into question large parts of what happens in our universities. This is not a statement that people don’t need to learn core concepts and skills. My point is that the fulcrum of learning has shifted. Knowing things will continue to matter less and less going forward as AI improves its capabilities. We’ll need to start intentionally developing broader and broader attributes of learners: metacognition, wellness, affect, social engagement, etc. Education will continue to shift toward human skills and away from primary assessment of knowledge gains disconnected from skills and practice and ways of being.


AI, the Next Chapter for College Librarians — from insidehighered.com by Lauren Coffey
Librarians have lived through the disruptions of fax machines, websites and Wikipedia, and now they are bracing to do it again as artificial intelligence tools go mainstream: “Maybe it’s our time to shine.”

A few months after ChatGPT launched last fall, faculty and students at Northwestern University had many questions about the building wave of new artificial intelligence tools. So they turned to a familiar source of help: the library.

“At the time it was seen as a research and citation problem, so that led them to us,” said Michelle Guittar, head of instruction and curriculum support at Northwestern University Libraries.

In response, Guittar, along with librarian Jeanette Moss, created a landing page in April, “Using AI Tools in Your Research.” At the time, the university itself had yet to put together a comprehensive resource page.


From Dr. Nick Jackson’s recent post on LinkedIn: 

Last night the Digitech team of junior and senior teachers from Scotch College Adelaide showcased their 2023 experiments, innovation, successes and failures with technology in education. Accompanied by Student digital leaders, we saw the following:

  • AI used for language learning, where avatars can help with accents
  • Motion-capture suits being used in media studies
  • AI used in assessment and automatic grading of work
  • AR used in design technology
  • VR used for immersive Junior school experiences
  • A teacher’s AI toolkit that has changed teaching practice and workflow
  • AR and the EyeJack app used by students to create dynamic art work
  • VR use in careers education in Senior school
  • How ethics around AI is taught to Junior school students from Year 1
  • Experiments with MyStudyWorks

Almost an Agent: What GPTs can do — from oneusefulthing.org by Ethan Mollick

What would a real AI agent look like? A simple agent that writes academic papers would, after being given a dataset and a field of study, read about how to compose a good paper, analyze the data, conduct a literature review, generate hypotheses, test them, and then write up the results, all without intervention. You put in a request, you get a Word document that contains a draft of an academic paper.

A process kind of like this one:
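In code, a pipeline like the one Mollick describes might be sketched as a fixed sequence of model calls, each feeding earlier outputs into later prompts. This is an illustrative sketch only — the stage prompts and the stubbed `llm()` helper are hypothetical placeholders, not Mollick’s implementation or any real agent framework:

```python
def llm(prompt: str) -> str:
    """Placeholder for a real language-model call; returns a canned string here."""
    return f"[model output for: {prompt[:40]}...]"

def paper_agent(dataset_description: str, field: str) -> str:
    """Run each stage of the paper-writing process in sequence,
    feeding earlier outputs into later prompts."""
    plan = llm(f"Read about how to compose a good {field} paper and outline one.")
    analysis = llm(f"Analyze this dataset: {dataset_description}")
    review = llm(f"Conduct a short literature review of {field}.")
    hypotheses = llm(f"Given {analysis} and {review}, generate testable hypotheses.")
    results = llm(f"Test these hypotheses against the data: {hypotheses}")
    return llm(f"Write a paper draft using {plan}, {analysis}, {review}, {results}")

draft = paper_agent("survey of 500 firms' AI adoption", "management science")
```

The point of the sketch is the shape of the loop, not the stub: a real agent would replace `llm()` with calls to an actual model, add tool use for the data analysis step, and likely iterate on intermediate outputs rather than running each stage exactly once.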


What I Learned From an Experiment to Apply Generative AI to My Data Course — from edsurge.com by Wendy Castillo

As an educator, I have a duty to remain informed about the latest developments in generative AI, not only to ensure learning is happening, but to stay on top of what tools exist, what benefits and limitations they have, and most importantly, how students might be using them.

However, it’s also important to acknowledge that the quality of work produced by students now requires higher expectations and potential adjustments to grading practices. The baseline is no longer zero, it is AI. And the upper limit of what humans can achieve with these new capabilities remains an unknown frontier.


Artificial Intelligence in Higher Education: Trick or Treat? — from tytonpartners.com by Kristen Fox and Catherine Shaw

Two components of AI: generative AI and predictive AI

 

Nearly half of CEOs believe that AI not only could—but should—replace their own jobs — from finance.yahoo.com by Orianna Rosa Royle; via Harsh Makadia

Researchers from edX, an education platform for upskilling workers, conducted a survey involving over 1,500 executives and knowledge workers. The findings revealed that nearly half of CEOs believe AI could potentially replace “most” or even all aspects of their own positions.

What’s even more intriguing is that 47% of the surveyed executives not only see the possibility of AI taking over their roles but also view it as a desirable development.

Why? Because they anticipate that AI could rekindle the need for traditional leadership for those who remain.

“Success in the CEO role hinges on effective leadership, and AI can liberate time for this crucial aspect of their role,” Andy Morgan, Head of edX for Business comments on the findings.

“CEOs understand that time saved on routine tasks can stimulate innovation, nurture creativity, and facilitate essential upskilling for their teams, fostering both individual and organizational success,” he adds.

But CEOs already know this: EdX’s research echoed that 79% of executives fear that if they don’t learn how to use AI, they’ll be unprepared for the future of work.

From DSC:
By the way, my first knee-jerk reaction to this was:

WHAT?!?!?!? And this from people who earn WAAAAY more than the average employee, no doubt.

After a chance to calm down a bit, I see that the article does say that CEOs aren’t going anywhere. Ah…ok…got it.


Strange Ways AI Disrupts Business Models, What’s Next For Creativity & Marketing, Some Provocative Data — from implications.com by Scott Belsky
In this edition, we explore some of the more peculiar ways that AI may change business models as well as recent releases for the world of creativity and marketing.

Time-based business models are liable for disruption via a value-based overhaul of compensation. Today, as most designers, lawyers, and many trades in between continue to charge by the hour, the AI-powered step-function improvements in workflows are liable to shake things up.

In such a world, time-based billing simply won’t work anymore unless the value derived from these services is also compressed by a multiple (unlikely). The classic time-based model of billing for lawyers, designers, consultants, freelancers etc is officially antiquated. So, how might the value be captured in a future where we no longer bill by the hour? …

The worlds of creativity and marketing are rapidly changing – and rapidly coming together

#AI #businessmodels #lawyers #billablehour

It becomes clear that just prompting to get images is a rather elementary use case of AI, compared to the ability to place and move objects, change perspective, adjust lighting, and many other actions using AI.



AlphaFold DB provides open access to over 200 million protein structure predictions to accelerate scientific research.

AlphaFold is an AI system developed by DeepMind that predicts a protein’s 3D structure from its amino acid sequence. It regularly achieves accuracy competitive with experiment.


After 25 years of growth for the $68 billion SEO industry, here’s how Google and other tech firms could render it extinct with AI — from fortune.com by Ravi Sen and The Conversation

But one other consequence is that I believe it may destroy the $68 billion search engine optimization industry that companies like Google helped create.

For the past 25 years or so, websites, news outlets, blogs and many others with a URL that wanted to get attention have used search engine optimization, or SEO, to “convince” search engines to share their content as high as possible in the results they provide to readers. This has helped drive traffic to their sites and has also spawned an industry of consultants and marketers who advise on how best to do that.

As an associate professor of information and operations management, I study the economics of e-commerce. I believe the growing use of generative AI will likely make all of that obsolete.


ChatGPT Plus members can upload and analyze files in the latest beta — from theverge.com by Wes Davis
ChatGPT Plus members can also use modes like Browse with Bing without manually switching, letting the chatbot decide when to use them.

OpenAI is rolling out new beta features for ChatGPT Plus members right now. Subscribers have reported that the update includes the ability to upload files and work with them, as well as multimodal support. Basically, users won’t have to select modes like Browse with Bing from the GPT-4 dropdown — it will instead guess what they want based on context.


Google agrees to invest up to $2 billion in OpenAI rival Anthropic — from reuters.com by Krystal Hu

Oct 27 (Reuters) – Alphabet’s (GOOGL.O) Google has agreed to invest up to $2 billion in the artificial intelligence company Anthropic, a spokesperson for the startup said on Friday.

The company has invested $500 million upfront into the OpenAI rival and agreed to add $1.5 billion more over time, the spokesperson said.

Google is already an investor in Anthropic, and the fresh investment would underscore a ramp-up in its efforts to better compete with Microsoft (MSFT.O), a major backer of ChatGPT creator OpenAI, as Big Tech companies race to infuse AI into their applications.


 

 

Thinking with Colleagues: AI in Education — from campustechnology.com by Mary Grush
A Q&A with Ellen Wagner

Wagner herself recently relied on the power of collegial conversations to probe the question: What’s on the minds of educators as they make ready for the growing influence of AI in higher education? CT asked her for some takeaways from the process.

We are in the very early days of seeing how AI is going to affect education. Some of us are going to need to stay focused on the basic research to test hypotheses. Others are going to dive into laboratory “sandboxes” to see if we can build some new applications and tools for ourselves. Still others will continue to scan newsletters like ProductHunt every day to see what kinds of things people are working on. It’s going to be hard to keep up, to filter out the noise on our own. That’s one reason why thinking with colleagues is so very important.

Mary and Ellen linked to “What Is Top of Mind for Higher Education Leaders about AI?” — from northcoasteduvisory.com. Below are some excerpts from those notes:

We are interested in how K-12 education will change in terms of foundational learning. With in-class, active learning designs, will younger students do a lot more intensive building of foundational writing and critical thinking skills before they get to college?

  1. The Human in the Loop: AI is built using math: think of applied statistics on steroids. Humans will be needed more than ever to manage, review and evaluate the validity and reliability of results. Curation will be essential.
  2. We will need to generate ideas about how to address AI factors such as privacy, equity, bias, copyright, intellectual property, accessibility, and scalability.
  3. Have other institutions experimented with AI detection and/or held off on emerging tools related to this? We have just recently adjusted guidance and paused some tools related to this, given the massive inaccuracies in detection (and related downstream issues in faculty-elevated conduct cases).

Even though we learn repeatedly that innovation has a lot to do with effective project management and a solid message that helps people understand what they can do to implement change, people really need innovation to be more exciting and visionary than that.  This is the place where we all need to help each other stay the course of change. 


Along these lines, also see:


What people ask me most. Also, some answers. — from oneusefulthing.org by Ethan Mollick
A FAQ of sorts

I have been talking to a lot of people about Generative AI, from teachers to business executives to artists to people actually building LLMs. In these conversations, a few key questions and themes keep coming up over and over again. Many of those questions are more informed by viral news articles about AI than about the real thing, so I thought I would try to answer a few of the most common, to the best of my ability.

I can’t blame people for asking because, for whatever reason, the companies actually building and releasing Large Language Models often seem allergic to providing any sort of documentation or tutorial besides technical notes. I was given much better documentation for the generic garden hose I bought on Amazon than for the immensely powerful AI tools being released by the world’s largest companies. So, it is no surprise that rumor has been the way that people learn about AI capabilities.

Currently, there are only really three AIs to consider: (1) OpenAI’s GPT-4 (which you can get access to with a Plus subscription or via Microsoft Bing in creative mode, for free), (2) Google’s Bard (free), or (3) Anthropic’s Claude 2 (free, but paid mode gets you faster access). As of today, GPT-4 is the clear leader, Claude 2 is second best (but can handle longer documents), and Google trails — though that will likely change when Google releases its rumored model update.

 

Next month Microsoft Corp. will start making its artificial intelligence features for Office widely available to corporate customers. Soon after, that will include the ability for it to read your emails, learn your writing style and compose messages on your behalf.

From DSC:
As readers of this blog know, I’m generally pro-technology. I see most technologies as tools — which can be used for good or for ill. So I will post items both pro and con concerning AI.

But outsourcing email communications to AI isn’t on my wish list or to-do list.

 

Reimagining Hiring and Learning with the Power of AI — from linkedin.com by Hari Srinivasan

That’s why today we’re piloting new tools like our new release of Recruiter 2024 and LinkedIn Learning’s AI-powered coaching experience to help with some of the heavy lifting so HR professionals can focus on what matters most.

“AI is quickly transforming recruitment, training, and many other HR practices,” says Josh Bersin, industry analyst and CEO of The Josh Bersin Company. “LinkedIn’s new features in Recruiter 2024 and LinkedIn Learning can massively improve recruiter productivity and help all employees build the skills they need to grow in their careers.”

By pairing generative AI with our unique insights gained from the more than 950 million professionals, 65 million companies, and 40,000 skills on our platform, we’ve reimagined our Recruiter product to help our customers find that short list of qualified candidates — faster.

From DSC:
While I’m very interested to see how Microsoft’s AI-powered LinkedIn Learning coach will impact people’s growth/development, I need to admit that I still approach AI and hiring/finding talent with caution. I’m sure I was weeded out by several Applicant Tracking Systems (ATS) back in 2017 when I was looking for my next position — and I only applied to positions for which I had the qualifications. And if you’ve tried to get a job recently, I bet you were weeded out by an ATS as well. So while this might help recruiters, the jury is still out for me as to whether these developments are good or bad for the rest of society.

Traditional institutions of higher education may want to research these developments to see which SKILLS are in demand.

Also relevant/see:

LinkedIn Launches Exciting Gen AI Features in Recruiter and Learning — from joshbersin.com by Josh Bersin

This week LinkedIn announced some massive Gen AI features in its two flagship products: LinkedIn Recruiter and LinkedIn Learning. Let me give you an overview.

LinkedIn goes big on new AI tools for learning, recruitment, marketing and sales, powered by OpenAI — from techcrunch.com by Ingrid Lunden

LinkedIn Learning will be incorporating AI in the form of a “learning coach” that is essentially built as a chatbot. Initially the advice that it will give will be trained on suggestions and tips, and it will be firmly in the camp of soft skills. One example: “How can I delegate tasks and responsibility effectively?”

The coach might suggest actual courses, but more importantly, it will actually also provide information, and advice, to users. LinkedIn itself has a giant catalogue of learning videos, covering both those soft skills but also actual technical skills and other knowledge needed for specific jobs. It will be interesting to see if LinkedIn extends the coach to covering that material, too.

 

 

ChatGPT can now see, hear, and speak — from openai.com
We are beginning to roll out new voice and image capabilities in ChatGPT. They offer a new, more intuitive type of interface by allowing you to have a voice conversation or show ChatGPT what you’re talking about.

Voice and image give you more ways to use ChatGPT in your life. Snap a picture of a landmark while traveling and have a live conversation about what’s interesting about it. When you’re home, snap pictures of your fridge and pantry to figure out what’s for dinner (and ask follow-up questions for a step-by-step recipe). After dinner, help your child with a math problem by taking a photo, circling the problem set, and having it share hints with both of you.

We’re rolling out voice and images in ChatGPT to Plus and Enterprise users over the next two weeks. Voice is coming on iOS and Android (opt-in in your settings) and images will be available on all platforms.





OpenAI Seeks New Valuation of Up to $90 Billion in Sale of Existing Shares — from wsj.com (behind paywall)
Potential sale would value startup at roughly triple where it was set earlier this year


The World’s First AI Cinema Experience Starring YOU Is Open In NZ And Buzzy Doesn’t Cover It — from theedge.co.nz by Seth Gupwell
Allow me to manage your expectations.

Because it’s the first-ever on Earth, it’s hard to label what kind of entertainment Hypercinema is. While it’s marketed as a “live AI experience” that blends “theatre, film and digital technology”, Dr. Gregory made it clear that it’s not here to make movies and TV extinct.

Your face and personality are how HyperCinema sets itself apart from the art forms of old. You get 15 photos of your face taken from different angles, then answer a questionnaire – mine started by asking what my fave vegetable was and ended by demanding to know what I thought the biggest threat to humanity was. Deep stuff, but the questions are always changing, cos that’s how AI rolls.

All of this information is stored on your cube – a green, glowing accessory that you carry around for the whole experience and insert into different sockets to transfer your info onto whatever screen is in front of you. Upon inserting your cube, the “live AI experience” starts.

The AI has taken your photos and superimposed your face on a variety of made-up characters in different situations.


Announcing Microsoft Copilot, your everyday AI companion — from blogs.microsoft.com by Yusuf Mehdi

We are entering a new era of AI, one that is fundamentally changing how we relate to and benefit from technology. With the convergence of chat interfaces and large language models you can now ask for what you want in natural language and the technology is smart enough to answer, create it or take action. At Microsoft, we think about this as having a copilot to help navigate any task. We have been building AI-powered copilots into our most used and loved products – making coding more efficient with GitHub, transforming productivity at work with Microsoft 365, redefining search with Bing and Edge and delivering contextual value that works across your apps and PC with Windows.

Today we take the next step to unify these capabilities into a single experience we call Microsoft Copilot, your everyday AI companion. Copilot will uniquely incorporate the context and intelligence of the web, your work data and what you are doing in the moment on your PC to provide better assistance – with your privacy and security at the forefront.


DALL·E 3 understands significantly more nuance and detail than our previous systems, allowing you to easily translate your ideas into exceptionally accurate images.
DALL·E 3 is now in research preview, and will be available to ChatGPT Plus and Enterprise customers in October, via the API and in Labs later this fall.


 
 

The Rumors Were True: Thomson Reuters Acquires Casetext for $650M Cash — from lawnext.com by Bob Ambrogi

Excerpt:

“The proposed transaction will complement Thomson Reuters existing AI roadmap and builds on its recent initiatives, including a commitment to invest more than $100 million annually on AI capabilities, the development of new generative AI experiences across its product suite, as well as a new plugin with Microsoft and Microsoft 365 Copilot for legal professionals,” TR’s announcement said.

From DSC:
I post this to show how AI continues to make inroads into the legal realm — and how serious vendors are about it. I believe AI-enabled applications will eventually increase access to justice for the citizens of the United States of America.


Below is an addendum on 6/28/23 that further illustrates how serious vendors are about AI:

Databricks picks up MosaicML, an OpenAI competitor, for $1.3B — from techcrunch.com by Ingrid Lunden

Excerpt:

Databricks announced it will pay $1.3 billion to acquire MosaicML, an open source startup with neural networks expertise that has built a platform for organizations to train large language models and deploy generative AI tools based on them.

 

Accenture announces jaw-dropping $3 billion investment in AI — from venturebeat.com by Carl Franzen; via Superhuman

Excerpt:

The generative AI announcements are coming fast and furious these days, but among the biggest in terms of sheer dollar commitments just landed: Accenture, the global professional services and consulting giant, today announced it will invest $3 billion (with a “b”!) in AI over the next three years in building out its team of AI professionals and AI-focused solutions for its clients.

“There is unprecedented interest in all areas of AI, and the substantial investment we are making in our Data & AI practice will help our clients move from interest to action to value, and in a responsible way with clear business cases,” said Julie Sweet, Accenture’s chairwoman and CEO.

Also related/see:

Artificial intelligence creates 40,000 new roles at Accenture — from computerweekly.com by Karl Flinders
Accenture is planning to add thousands of AI experts to its workforce as part of a $3bn investment in its data and artificial intelligence practice

Why leaders need to evolve alongside generative AI — from fastcompany.com by Kelsey Behringer
Even if you’re not an educator, you should not be sitting on the sidelines watching the generative AI conversation being had around you—hop in.

Excerpts (emphasis DSC):

Leaders should be careful to watch and support education right now. At the end of the day, the students sitting in K-12 and college classrooms are going to be future CPAs, lawyers, writers, and teachers. If you are parenting a child, you have skin in the game. If you use professional services, you have skin in the game. When it comes to education, we all have skin in the game.

Students need to master fundamental skills like editing, questioning, researching, and verifying claims before they can use generative AI exceptionally well.

GenAI & Education: Enhancement, not Replacement — from drphilippahardman.substack.com by Dr. Philippa Hardman
How to co-exist in the age of automation

Excerpts (emphasis DSC):

[On 6/15/23, I joined] colleagues from OpenAI, Google, Microsoft, Stanford, Harvard and other others at the first meeting of the GenAI Summit. Our shared goal [was] to help to educate universities & schools in Europe about the impact of Generative AI on their work.

How can we effectively communicate to education professionals that generative AI will enhance their work rather than replace them?

A recent controlled study found that ChatGPT can help professionals increase their efficiency in routine tasks by ~35%. If we keep in mind that the productivity gains brought by the steam engine in the nineteenth century were ~25%, this is huge.

As educators, we should embrace the power of ChatGPT to automate the repetitive tasks which we’ve been distracted by for decades. Lesson planning, content creation, assessment design, grading and feedback – generative AI can help us to do all of these things faster than ever before, freeing us up to focus on where we bring most value for our students.

Google, one of AI’s biggest backers, warns own staff about chatbots — from reuters.com by Jeffrey Dastin and Anna Tong

Excerpt:

SAN FRANCISCO, June 15 (Reuters) – Alphabet Inc (GOOGL.O) is cautioning employees about how they use chatbots, including its own Bard, at the same time as it markets the program around the world, four people familiar with the matter told Reuters.

The Google parent has advised employees not to enter its confidential materials into AI chatbots, the people said and the company confirmed, citing long-standing policy on safeguarding information.

The economic potential of generative AI: The next productivity frontier — from mckinsey.com
Generative AI’s impact on productivity could add trillions of dollars in value to the global economy—and the era is just beginning.



Preparing for the Classrooms and Workplaces of the Future: Generative AI in edX — from campustechnology.com by Mary Grush
A Q&A with Anant Agarwal


Adobe Firefly for the Enterprise — Dream Bigger with Adobe Firefly.
Dream it, type it, see it with Firefly, our creative generative AI engine. Now in Photoshop (beta), Illustrator, Adobe Express, and on the web.


Apple Vision Pro, Higher Education and the Next 10 Years — from insidehighered.com by Joshua Kim
How this technology will play out in our world over the next decade.



Zoom can now give you AI summaries of the meetings you’ve missed — from theverge.com by Emma Roth


Mercedes-Benz Is Adding ChatGPT to Cars for AI Voice Commands — from decrypt.co by Jason Nelson; via Superhuman
The luxury automaker is set to integrate OpenAI’s ChatGPT chatbot into its Mercedes-Benz User Experience (MBUX) feature in the U.S.


 
© 2024 | Daniel Christian