Enter the New Era of Mobile AI With Samsung Galaxy S24 Series — from news.samsung.com

Galaxy AI introduces meaningful intelligence aimed at enhancing every part of life, especially the phone’s most fundamental role: communication. When you need to defy language barriers, Galaxy S24 makes it easier than ever. Chat with another student or colleague from abroad. Book a reservation while on vacation in another country. It’s all possible with Live Translate: two-way, real-time voice and text translation of phone calls within the native app. No third-party apps are required, and on-device AI keeps conversations completely private.

With Interpreter, live conversations can be instantly translated on a split-screen view so people standing opposite each other can read a text transcription of what the other person has said. It even works without cellular data or Wi-Fi.


Galaxy S24 — from theneurondaily.com by Noah Edelman & Pete Huang

Samsung just announced the first truly AI-powered smartphone: the Galaxy S24.


For us AI power users, the features aren’t exactly new, but it’s the first time we’ve seen them packaged up into a smartphone (Siri doesn’t count, sorry).


Samsung’s Galaxy S24 line arrives with camera improvements and generative AI tricks — from techcrunch.com by Brian Heater
Starting at $800, the new flagships offer brighter screens and a slew of new photo-editing tools

 

OpenAI announces first partnership with a university — from cnbc.com by Hayden Field

Key Points:

  • OpenAI on Thursday announced its first partnership with a higher education institution.
  • Starting in February, Arizona State University will have full access to ChatGPT Enterprise and plans to use it for coursework, tutoring, research and more.
  • The partnership has been in the works for at least six months.
  • ASU plans to build a personalized AI tutor for students, allow students to create AI avatars for study help and broaden the university’s prompt engineering course.

A new collaboration with OpenAI charts the future of AI in higher education — from news.asu.edu

The collaboration between ASU and OpenAI brings the advanced capabilities of ChatGPT Enterprise into higher education, setting a new precedent for how universities enhance learning, creativity and student outcomes.

“ASU recognizes that augmented and artificial intelligence systems are here to stay, and we are optimistic about their ability to become incredible tools that help students to learn, learn more quickly and understand subjects more thoroughly,” ASU President Michael M. Crow said. “Our collaboration with OpenAI reflects our philosophy and our commitment to participating directly in the responsible evolution of AI learning technologies.”


AI <> Academia — from drphilippahardman.substack.com by Dr. Philippa Hardman
What might emerge from ASU’s pioneering partnership with OpenAI?

Phil’s Wish List #2: Smart Curriculum Development
ChatGPT assists in creating and updating course curricula, based on both student data and emerging domain and pedagogical research on the topic.

Output: Using AI, it will be possible to review course content and make data-informed, automated recommendations based on the latest pedagogical and domain-specific research.

Potential Impact: increased dynamism and relevance in course content and reduced administrative lift for academics.


A full list of AI ideas from AI-for-Education.org

You can filter by category, by ‘What does it do?’, by AI tool or search for keywords.


Navigating the new normal: Adapting in the age of AI and hybrid work models — from chieflearningofficer.com by Dr. Kylie Ensrud

Unlike traditional leadership, adaptable leadership is not bound by rigid rules and protocols. Instead, it thrives on flexibility. Adaptable leaders are willing to experiment, make course corrections, and pivot when necessary. Adaptable leadership is about flexibility, resilience and a willingness to embrace change. It embodies several key principles that redefine the role of leaders in organizations:

  1. Embracing uncertainty

Adaptable leaders understand that uncertainty is the new norm. They do not shy away from ambiguity but instead, see it as an opportunity for growth and innovation. They encourage a culture of experimentation and learning from failure.

  2. Empowering teams

Instead of dictating every move, adaptable leaders empower their teams to take ownership of their work. They foster an environment of trust and collaboration, enabling individuals to contribute their unique perspectives and skills.

  3. Continuous learning

Adaptable leaders are lifelong learners. They are constantly seeking new knowledge, stay informed about industry trends and encourage their teams to do the same. They understand that knowledge is a dynamic asset that must be constantly updated.


Major AI in Education Related Developments this week — from stefanbauschard.substack.com by Stefan Bauschard
ASU integrates with ChatGPT, K-12 AI integrations, Agents & the Rabbit, Uruguay, Meta and AGI, Rethinking curriculum

“The greatest risk is leaving school curriculum unchanged when the entire world is changing.”
Hadi Partovi, founder of Code.org and angel investor in Facebook, Dropbox, Airbnb, and Uber

Tutorbots in college. On a more limited scale, Georgia State University, Morgan State University, and the University of Central Florida are piloting a project using chatbots to support students in foundational math and English courses.


Pioneering AI-Driven Instructional Design in Small College Settings — from campustechnology.com by Gopu Kiron
For institutions that lack the budget or staff expertise to utilize instructional design principles in online course development, generative AI may offer a way forward.

Unfortunately, smaller colleges — arguably the institutions whose students are likely to benefit the most from ID enhancements — frequently find themselves excluded from authentically engaging in the ID arena due to tight budgets, limited faculty online course design expertise, and the lack of ID-specific staff roles. Despite this, recent developments in generative AI may offer these institutions a low-cost, tactical avenue to compete with more established players.


Google’s new AI solves math olympiad problems — from bensbites.beehiiv.com

There’s a new AI from Google DeepMind called AlphaGeometry that totally nails solving super hard geometry problems. We’re talking problems so tough only math geniuses who compete in the International Mathematical Olympiad can figure them out.


 

The biggest things that happened in AI this year — from superhuman.ai by Zain Kahn

January:

  • Microsoft raises eyebrows with a huge $10 Billion investment in OpenAI.

February:

  • Meta launches LLaMA, their open-source rival to OpenAI’s models.
  • OpenAI announces ChatGPT Plus, a paid version of their chatbot.
  • Microsoft announces a new AI-powered Bing Search.

March:

  • OpenAI announces the powerful GPT-4 model, still considered to be the gold standard.
  • Midjourney releases V5, which brings AI-powered image generation one step closer to photorealism.
  • Microsoft launches Copilot for Microsoft 365.
  • Google launches Bard, its rival to ChatGPT.

…and more


AI 2023: A Year in Review — from stefanbauschard.substack.com by Stefan Bauschard
2023 developments in AI and a hint of what they are building toward

Some of the items that Stefan includes in his posting include:

  • ChatGPT and other language models that generate text.
  • Image generators.
  • Video generators.
  • AI models that can read, hear, and speak.
  • AI models that can see.
  • Improving models.
  • “Multimodal” models.
  • Training on specific content.
  • Reasoning & planning.
  • …and several others

The Dictionary.com Word of the Year is “hallucinate.” — from content.dictionary.com by Nick Norlen and Grant Barrett; via The Rundown AI

hallucinate
[ huh-loo-suh-neyt ]

verb
(of artificial intelligence) to produce false information contrary to the intent of the user and present it as if true and factual. Example: When chatbots hallucinate, the result is often not just inaccurate but completely fabricated.


Soon, every employee will be both AI builder and AI consumer — from zdnet.com by Joe McKendrick, via Robert Gibson on LinkedIn
“Standardized tools and platforms as well as advanced low- or no-code tech may enable all employees to become low-level engineers,” suggests a recent report.

The time could be ripe for a blurring of the lines between developers and end-users, a recent report out of Deloitte suggests. It makes more business sense to focus on bringing in citizen developers for ground-level programming, versus seeking superstar software engineers, the report’s authors argue, or — as they put it — “instead of transforming from a 1x to a 10x engineer, employees outside the tech division could be going from zero to one.”

Along these lines, see:

  • TECH TRENDS 2024 — from deloitte.com
    Six emerging technology trends demonstrate that in an age of generative machines, it’s more important than ever for organizations to maintain an integrated business strategy, a solid technology foundation, and a creative workforce.

UK Supreme Court rules AI is not an inventor — from theverge.com by Emilia David

The ruling follows a similar decision denying patent registrations naming AI as creators.

The UK Supreme Court ruled that AI cannot get patents, declaring it cannot be named as an inventor of new products because the law considers only humans or companies to be creators.


The Times Sues OpenAI and Microsoft Over A.I. Use of Copyrighted Work — from nytimes.com by Michael M. Grynbaum and Ryan Mac

The New York Times sued OpenAI and Microsoft for copyright infringement on Wednesday, opening a new front in the increasingly intense legal battle over the unauthorized use of published work to train artificial intelligence technologies.

The suit does not include an exact monetary demand. But it says the defendants should be held responsible for “billions of dollars in statutory and actual damages” related to the “unlawful copying and use of The Times’s uniquely valuable works.” It also calls for the companies to destroy any chatbot models and training data that use copyrighted material from The Times.

On this same topic, also see:


Apple’s iPhone Design Chief Enlisted by Jony Ive, Sam Altman to Work on AI Devices — from bloomberg.com by Mark Gurman (behind paywall)

  • Design executive Tang Tan is set to leave Apple in February
  • Tan will join Ive’s LoveFrom design studio, work on AI project

AI 2023: Chatbots Spark New Tools — from heatherbcooper.substack.com by Heather Cooper

ChatGPT and Other Chatbots
The arrival of ChatGPT sparked tons of new AI tools and changed the way we thought about using a chatbot in our daily lives.

Chatbots like ChatGPT, Perplexity, Claude, and Bing Chat can help content creators by quickly generating ideas, outlines, drafts, and full pieces of content, allowing creators to produce more high-quality content in less time.

These AI tools boost efficiency and creativity in content production across formats like blog posts, social captions, newsletters, and more.


Microsoft’s next Surface laptops will reportedly be its first true ‘AI PCs’ — from theverge.com by Emma Roth
Next year’s Surface Laptop 6 and Surface Pro 10 will feature Arm and Intel options, according to Windows Central.

Microsoft is getting ready to upgrade its Surface lineup with new AI-enabled features, according to a report from Windows Central. Unnamed sources told the outlet the upcoming Surface Pro 10 and Surface Laptop 6 will come with a next-gen neural processing unit (NPU), along with Intel and Arm-based options.


How one of the world’s oldest newspapers is using AI to reinvent journalism — from theguardian.com by Alexandra Topping
Berrow’s Worcester Journal is one of several papers owned by the UK’s second biggest regional news publisher to hire ‘AI-assisted’ reporters

With the AI-assisted reporter churning out bread and butter content, other reporters in the newsroom are freed up to go to court, meet a councillor for a coffee or attend a village fete, says the Worcester News editor, Stephanie Preece.

“AI can’t be at the scene of a crash, in court, in a council meeting, it can’t visit a grieving family or look somebody in the eye and tell that they’re lying. All it does is free up the reporters to do more of that,” she says. “Instead of shying away from it, or being scared of it, we are saying AI is here to stay – so how can we harness it?”



What to Expect in AI in 2024 — from hai.stanford.edu
Seven Stanford HAI faculty and fellows predict the biggest stories for next year in artificial intelligence.

Topics include:

  • White Collar Work Shifts
  • Deepfake Proliferation
  • GPUs Shortage
  • More Helpful Agents
  • Hopes for U.S. Regulation
  • Asking Big Questions, Applying New Policies
  • Companies Will Navigate Complicated Regulations

Addendum on 1/2/24:


 

The rise of AI fake news is creating a ‘misinformation superspreader’ — from washingtonpost.com by Pranshu Verma
AI is making it easy for anyone to create propaganda outlets, producing content that can be hard to differentiate from real news

Artificial intelligence is automating the creation of fake news, spurring an explosion of web content mimicking factual articles that instead disseminates false information about elections, wars and natural disasters.

Since May, websites hosting AI-created false articles have increased by more than 1,000 percent, ballooning from 49 sites to more than 600, according to NewsGuard, an organization that tracks misinformation.

Historically, propaganda operations have relied on armies of low-paid workers or highly coordinated intelligence organizations to build sites that appear to be legitimate. But AI is making it easy for nearly anyone — whether they are part of a spy agency or just a teenager in their basement — to create these outlets, producing content that is at times hard to differentiate from real news.


AI, and everything else — from pitch.com by Benedict Evans


Chevy Chatbots Go Rogue
How a customer service chatbot made a splash on social media; write your holiday cards with AI

A Chevy dealership’s AI chatbot, designed to assist customers in their vehicle search, became a social media sensation for all the wrong reasons. One user even convinced the chatbot to agree to sell a 2024 Chevy Tahoe for just one dollar!

This story is exactly why AI implementation needs to be approached strategically. Learning to use AI also means learning to think through the guardrails and boundaries.

Here are our tips.


Rite Aid used facial recognition on shoppers, fueling harassment, FTC says — from washingtonpost.com by Drew Harwell
A landmark settlement over the pharmacy chain’s use of the surveillance technology could raise further doubts about facial recognition’s use in stores, airports and other venues

The pharmacy chain Rite Aid misused facial recognition technology in a way that subjected shoppers to unfair searches and humiliation, the Federal Trade Commission said Tuesday, part of a landmark settlement that could raise questions about the technology’s use in stores, airports and other venues nationwide.

But the chain’s “reckless” failure to adopt safeguards, coupled with the technology’s long history of inaccurate matches and racial biases, ultimately led store employees to falsely accuse shoppers of theft, leading to “embarrassment, harassment, and other harm” in front of their family members, co-workers and friends, the FTC said in a statement.


 

Prompt engineering — from platform.openai.com

This guide shares strategies and tactics for getting better results from large language models (sometimes referred to as GPT models) like GPT-4. The methods described here can sometimes be deployed in combination for greater effect. We encourage experimentation to find the methods that work best for you.

Some of the examples demonstrated here currently work only with our most capable model, gpt-4. In general, if you find that a model fails at a task and a more capable model is available, it’s often worth trying again with the more capable model.

You can also explore example prompts which showcase what our models are capable of…
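Two of the tactics that guide describes — separating instructions from input with delimiters, and supplying a few worked examples ("few-shot" prompting) — can be sketched in a few lines. The helper below is an illustrative assumption, not code from the guide itself:

```python
# Illustrative sketch of two prompt-engineering tactics: delimiters and
# few-shot examples. The helper name and layout are assumptions.

def build_prompt(instruction: str, examples: list[tuple[str, str]], user_input: str) -> str:
    """Assemble a prompt: instruction, few-shot examples, then the real input."""
    parts = [instruction, ""]
    for sample_in, sample_out in examples:
        # Triple quotes delimit the input so the model can't confuse it
        # with the instructions themselves.
        parts.append(f'Input: """{sample_in}"""')
        parts.append(f"Output: {sample_out}")
        parts.append("")
    parts.append(f'Input: """{user_input}"""')
    parts.append("Output:")  # the model completes from here
    return "\n".join(parts)

prompt = build_prompt(
    instruction="Classify the sentiment of each input as positive or negative.",
    examples=[("I loved this phone.", "positive"),
              ("The battery died in a day.", "negative")],
    user_input="Setup was painless and the screen is gorgeous.",
)
print(prompt)
```

The resulting string would be sent as the user message in a chat-completion call; the guide's broader point is that such tactics can be combined and should be tested empirically against your own tasks.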


Preparedness — from openai.com

The study of frontier AI risks has fallen far short of what is possible and where we need to be. To address this gap and systematize our safety thinking, we are adopting the initial version of our Preparedness Framework. It describes OpenAI’s processes to track, evaluate, forecast, and protect against catastrophic risks posed by increasingly powerful models.


Every Major Tech Development From 2023 — from newsletter.thedailybite.co
The yearly tech round-up, Meta’s smart glasses upgrade, and more…

Here’s every major innovation from the last 365 days:

  • Microsoft: Launched additional OpenAI-powered features, including Copilot for Microsoft Dynamics 365 and Microsoft 365, enhancing business functionalities like text summarization, tone adjustment in emails, data insights, and automatic presentation creation.
  • Google: Introduced Duet, akin to Microsoft’s Copilot, integrating Gen AI across Google Workspace for writing assistance and custom visual creation. Also debuted Generative AI Studio, enabling developers to craft AI apps, and unveiled Gemini & Bard, a new AI technology with impressive features.
  • Salesforce: …
  • Adobe: …
  • Amazon Web Services (AWS): …
  • IBM:  …
  • Nvidia:  …
  • OpenAI:  …
  • Meta (Facebook):
  • Tencent:
  • Baidu:

News in chatbots — from theneurondaily.com by Noah Edelman & Pete Huang

Here’s what’s on the horizon:

  • Multimodal AI gets huge. Instead of just typing, more people will talk to AI, listen to it, create images, get visual feedback, create graphs, and more.
  • AI video gets really good. So far, AI videos have been cool-but-not-practical. They’re getting way better and we’re on the verge of seeing 100% AI-generated films, animations, and cartoons.
  • AI on our phones. Imagine Siri with the brains of ChatGPT-4 and the ambition of Alexa. TBD who pulls this off first!
  • GPT-5. ‘Nuff said.

20 Best AI Chatbots in 2024 — from eweek.com by Aminu Abdullahi
These leading AI chatbots use generative AI to offer a wide menu of functionality, from personalized customer service to improved information retrieval.

Top 20 Generative AI Chatbot Software: Comparison Chart
We compared the key features of the top generative AI chatbot software to help you determine the best option for your company…


What Google Gemini Teaches Us About Trust and The Future — from aiwithallie.beehiiv.com by Allie K. Miller
The AI demo may have been misleading, but it teaches us two huge lessons.

TL;DR (too long, didn’t read)

  1. We’re moving from ‘knowledge’ to ‘action’. 
    AI moving into proactive interventions.
  2. We’re getting more efficient. 
    Assume 2024 brings lower AI OpEx.
  3. It’s multi-modal from here on out. 
    Assume 2024 is multi-modal.
  4. There’s no one model to rule them all.
    Assume 2024 has more multi-model orchestration & delegation.

Stay curious, stay informed,
Allie


Chatbot Power Rankings — from theneurondaily.com by Noah Edelman

Here’s our power rankings of the best chatbots for (non-technical) work:

1: ChatGPT-4 — Unquestionably the smartest, with the strongest writing, coding, and reasoning abilities.

T1: Gemini Ultra — In theory as powerful as GPT-4. We won’t know for sure until it’s released in 2024.

2: Claude 2 — Top choice for managing lengthy PDFs (handles ~75,000 words), and rarely hallucinates. Can be somewhat stiff.

3: Perplexity — Ideal for real-time information. Upgrading to Pro grants access to both Claude-2 and GPT-4.

T4: Pi — The most “human-like” chatbot, though integrating with business data can be challenging.

T4: Bing Chat — Delivers GPT-4-esque responses, has internet access, and can generate images. Bad UX and doesn’t support PDFs.

T4: Bard — Now powered by Gemini Pro, offers internet access and answer verification. Tends to hallucinate more frequently.

and others…


Midjourney + ChatGPT = Amazing AI Art — from theaigirl.substack.com by Diana Dovgopol and the Pycoach
Turn ChatGPT into a powerful Midjourney prompt machine with basic and advanced formulas.


Make music with AI — from aitestkitchen.withgoogle.com re: Music FX


 

 

Google NotebookLM (experiment)

From DSC:
Google hopes that this personalized AI/app will help people with their note-taking, thinking, brainstorming, learning, and creating.

It reminds me of what Derek Bruff was just saying in regards to Top Hat’s Ace product being able to work with a much narrower set of information — i.e., a course — and to be almost like a personal learning assistant for the course you are taking. (As Derek mentions, this depends upon how extensively one uses the CMS/LMS in the first place.)

 

Introducing Gemini: our largest and most capable AI model — from blog.google by Sundar Pichai and Demis Hassabis
Making AI more helpful for everyone

Today, we’re a step closer to this vision as we introduce Gemini, the most capable and general model we’ve ever built.

Gemini is the result of large-scale collaborative efforts by teams across Google, including our colleagues at Google Research. It was built from the ground up to be multimodal, which means it can generalize and seamlessly understand, operate across and combine different types of information including text, code, audio, image and video.



One year in: from ChatGPT3.5 to a whole new world — from stefanbauschard.substack.com by Stefan Bauschard
Happy Birthday to ChatGPT 3.5+. You’re growing up so fast!

So, in many ways, ChatGPT and its friends are far from as intelligent as a human; they do not have “general” intelligence (AGI).

But this will not last for long. The debate about Project Q* aside, AIs with the ability to engage in high-level reasoning, plan, and have long-term memory are expected in the next 2–3 years. We are already seeing AI agents that are developing the ability to act autonomously and collaborate to a degree. Once AIs can reason and plan, acting autonomously and collaborating will not be a challenge.


ChatGPT is winning the future — but what future is that? — from theverge.com by David Pierce
OpenAI didn’t mean to kickstart a generational shift in the technology industry. But it did. Now all we have to decide is where to go from here.

We don’t know yet if AI will ultimately change the world the way the internet, social media, and the smartphone did. Those things weren’t just technological leaps — they actually reorganized our lives in fundamental and irreversible ways. If the final form of AI is “my computer writes some of my emails for me,” AI won’t make that list. But there are a lot of smart people and trillions of dollars betting that’s the beginning of the AI story, not the end. If they’re right, the day OpenAI launched its “research preview” of ChatGPT will be much more than a product launch for the ages. It’ll be the day the world changed, and we didn’t even see it coming.


“AI is overhyped” — from theneurondaily.com by Pete Huang & Noah Edelman

If you’re feeling like AI is the future, but you’re not sure where to start, here’s our advice for 2024 based on our convos with business leaders:

  1. Start with problems – Map out where your business is spending time and money, then ask if AI can help. Don’t do AI to say you’re doing AI.
  2. Model the behavior – Teams do better in making use of new tools when their leadership buys in. Show them your support.
  3. Do what you can, wait for the rest – With AI evolving so fast, “do nothing for now” is totally valid. Start with what you can do today (accelerating individual employee output) and keep up-to-date on the rest.

Google says new AI model Gemini outperforms ChatGPT in most tests — from theguardian.com by Dan Milmo
Gemini is being released in form of upgrade to Google’s chatbot Bard, but not yet in UK or EU

Google has unveiled a new artificial intelligence model that it claims outperforms ChatGPT in most tests and displays “advanced reasoning” across multiple formats, including an ability to view and mark a student’s physics homework.

The model, called Gemini, is the first to be announced since last month’s global AI safety summit, at which tech firms agreed to collaborate with governments on testing advanced systems before and after their release. Google said it was in discussions with the UK’s newly formed AI Safety Institute over testing Gemini’s most powerful version, which will be released next year.

 

Expanding Bard’s understanding of YouTube videos — via AI Valley

  • What: We’re taking the first steps in Bard’s ability to understand YouTube videos. For example, if you’re looking for videos on how to make olive oil cake, you can now also ask how many eggs the recipe in the first video requires.
  • Why: We’ve heard you want deeper engagement with YouTube videos. So we’re expanding the YouTube Extension to understand some video content so you can have a richer conversation with Bard about it.

Reshaping the tree: rebuilding organizations for AI — from oneusefulthing.org by Ethan Mollick
Technological change brings organizational change.

I am not sure who said it first, but there are only two ways to react to exponential change: too early or too late. Today’s AIs are flawed and limited in many ways. While that restricts what AI can do, the capabilities of AI are increasing exponentially, both in terms of the models themselves and the tools these models can use. It might seem too early to consider changing an organization to accommodate AI, but I think that there is a strong possibility that it will quickly become too late.

From DSC:
Readers of this blog have seen the following graphic for several years now, but there is no question that we are in a time of exponential change. One would have had an increasingly hard time arguing the opposite of this perspective during that time.

 


 



Nvidia’s revenue triples as AI chip boom continues — from cnbc.com by Jordan Novet; via GSV

KEY POINTS

  • Nvidia’s results surpassed analysts’ projections for revenue and income in the fiscal third quarter.
  • Demand for Nvidia’s graphics processing units has been exceeding supply, thanks to the rise of generative artificial intelligence.
  • Nvidia announced the GH200 GPU during the quarter.

Here’s how the company did, compared to the consensus among analysts surveyed by LSEG, formerly known as Refinitiv:

  • Earnings: $4.02 per share, adjusted, vs. $3.37 per share expected
  • Revenue: $18.12 billion, vs. $16.18 billion expected

Nvidia’s revenue grew 206% year over year during the quarter ending Oct. 29, according to a statement. Net income, at $9.24 billion, or $3.71 per share, was up from $680 million, or 27 cents per share, in the same quarter a year ago.
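A quick back-of-the-envelope check shows the reported figures hang together (all numbers are taken from the excerpt above, not from Nvidia's filings):

```python
# Sanity-checking the reported Nvidia figures from the excerpt.
revenue = 18.12e9          # reported quarterly revenue
growth = 2.06              # 206% year-over-year growth
prior_year_revenue = revenue / (1 + growth)
print(f"Implied prior-year revenue: ${prior_year_revenue / 1e9:.2f}B")  # ≈ $5.92B

net_income = 9.24e9        # reported net income
prior_net_income = 0.68e9  # $680 million a year earlier
print(f"Net income multiple: {net_income / prior_net_income:.1f}x")     # ≈ 13.6x
```

A 206% increase is roughly a tripling of revenue (consistent with the headline), while net income grew far faster, reflecting the margin expansion behind the earnings beat.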



 

From DSC:
The recent drama over at OpenAI reminds me of how important a few individuals are in influencing the lives of millions of people.

The C-Suites (i.e., the Chief Executive Officers, Chief Financial Officers, Chief Operating Officers, and the like) of companies like OpenAI, Alphabet (Google), Meta (Facebook), Microsoft, Netflix, NVIDIA, Amazon, Apple, and a handful of others have enormous power. Why? Because of the enormous power and reach of the technologies that they create, market, and provide.

We need to be praying for the hearts of those in the C-Suites of these powerful vendors — as well as for their Boards.

LORD, grant them wisdom and help mold their hearts and perspectives so that they truly care about others. May their decisions not be based on making money alone…or doing something just because they can.

What happens in their hearts and minds DOES and WILL continue to impact the rest of us. And we’re talking about real ramifications here. This isn’t pie-in-the-sky thinking or ideas. This is for real. With real consequences. If you doubt that, go ask the families of those whose sons and daughters took their own lives due to what happened out on social media platforms. Disclosure: I use LinkedIn and Twitter quite a bit. I’m not bashing these platforms per se. But my point is that there are real impacts due to a variety of technologies. What goes on in the hearts and minds of the leaders of these tech companies matters.


Some relevant items:

Navigating Attention-Driving Algorithms, Capturing the Premium of Proximity for Virtual Teams, & New AI Devices — from implications.com by Scott Belsky

Excerpts (emphasis DSC):

No doubt, technology influences us in many ways we don’t fully understand. But one area where valid concerns run rampant is the attention-seeking algorithms powering the news and media we consume on modern platforms that efficiently polarize people. Perhaps we’ll call it The Law of Anger Expansion: When people are angry in the age of algorithms, they become MORE angry and LESS discriminate about who and what they are angry at.

Algorithms that optimize for grabbing attention, thanks to AI, ultimately drive polarization.

The AI learns quickly that a rational or “both sides” view is less likely to sustain your attention (so you won’t get many of those, which drives the sensation that more of the world agrees with you). But the rage-inducing stuff keeps us swiping.

Our feeds are being sourced in ways that dramatically change the content we’re exposed to.

And then these algorithms expand on these ultimately destructive emotions – “If you’re afraid of this, maybe you should also be afraid of this” or “If you hate those people, maybe you should also hate these people.”

How do we know when we’ve been polarized? This is the most important question of the day.

Whatever is inflaming you is likely an algorithm-driven expansion of anger and an imbalance of context.


 

 

Nearly half of CEOs believe that AI not only could—but should—replace their own jobs — from finance.yahoo.com by Orianna Rosa Royle; via Harsh Makadia

Researchers from edX, an education platform for upskilling workers, conducted a survey involving over 1,500 executives and knowledge workers. The findings revealed that nearly half of CEOs believe AI could potentially replace “most” or even all aspects of their own positions.

What’s even more intriguing is that 47% of the surveyed executives not only see the possibility of AI taking over their roles but also view it as a desirable development.

Why? Because they anticipate that AI could rekindle the need for traditional leadership for those who remain.

“Success in the CEO role hinges on effective leadership, and AI can liberate time for this crucial aspect of their role,” Andy Morgan, Head of edX for Business comments on the findings.

“CEOs understand that time saved on routine tasks can stimulate innovation, nurture creativity, and facilitate essential upskilling for their teams, fostering both individual and organizational success,” he adds.

But CEOs already know this: EdX’s research echoed that 79% of executives fear that if they don’t learn how to use AI, they’ll be unprepared for the future of work.

From DSC:
By the way, my first knee-jerk reaction to this was:

WHAT?!?!?!? And this from people who earn WAAAAY more than the average employee, no doubt.

After a chance to calm down a bit, I see that the article does say that CEOs aren’t going anywhere. Ah…ok…got it.


Strange Ways AI Disrupts Business Models, What’s Next For Creativity & Marketing, Some Provocative Data — from implications.com by Scott Belsky
In this edition, we explore some of the more peculiar ways that AI may change business models as well as recent releases for the world of creativity and marketing.

Time-based business models are ripe for disruption via a value-based overhaul of compensation. Today, as most designers, lawyers, and many trades in between continue to charge by the hour, the AI-powered step-function improvements in workflows are liable to shake things up.

In such a world, time-based billing simply won’t work anymore unless the value derived from these services is also compressed by a multiple (unlikely). The classic time-based model of billing for lawyers, designers, consultants, freelancers, etc., is officially antiquated. So, how might the value be captured in a future where we no longer bill by the hour? …

The worlds of creativity and marketing are rapidly changing – and rapidly coming together

#AI #businessmodels #lawyers #billablehour

It becomes clear that just prompting to get images is a rather elementary use case of AI, compared to the ability to place and move objects, change perspective, adjust lighting, and many other actions using AI.



AlphaFold DB provides open access to over 200 million protein structure predictions to accelerate scientific research.

AlphaFold is an AI system developed by DeepMind that predicts a protein’s 3D structure from its amino acid sequence. It regularly achieves accuracy competitive with experiment.
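Because the database is openly accessible, predictions can also be retrieved programmatically. Below is a minimal sketch, assuming the public AlphaFold DB REST API at alphafold.ebi.ac.uk, which serves prediction metadata keyed by UniProt accession (the accession `P69905`, human hemoglobin subunit alpha, is used here purely as an illustration):

```python
# Minimal sketch of querying AlphaFold DB (assumption: the public REST
# endpoint https://alphafold.ebi.ac.uk/api/prediction/{accession}).
import json
from urllib.request import urlopen

API = "https://alphafold.ebi.ac.uk/api/prediction/{accession}"

def prediction_url(accession: str) -> str:
    """Build the AlphaFold DB query URL for a UniProt accession."""
    return API.format(accession=accession)

def fetch_prediction(accession: str) -> dict:
    """Fetch prediction metadata for one accession.

    The response is a JSON list; the first entry includes links to the
    predicted structure files (e.g. PDB and mmCIF).
    """
    with urlopen(prediction_url(accession)) as resp:
        return json.load(resp)[0]

# Example (requires network access):
#   meta = fetch_prediction("P69905")
#   structure_link = meta["pdbUrl"]
```

This only retrieves metadata and file links; the structure files themselves are separate downloads.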


After 25 years of growth for the $68 billion SEO industry, here’s how Google and other tech firms could render it extinct with AI — from fortune.com by Ravi Sen and The Conversation

But one other consequence is that I believe it may destroy the $68 billion search engine optimization industry that companies like Google helped create.

For the past 25 years or so, websites, news outlets, blogs and many others with a URL that wanted to get attention have used search engine optimization, or SEO, to “convince” search engines to rank their content as high as possible in the results they provide to readers. This has helped drive traffic to their sites and has also spawned an industry of consultants and marketers who advise on how best to do that.

As an associate professor of information and operations management, I study the economics of e-commerce. I believe the growing use of generative AI will likely make all of that obsolete.


ChatGPT Plus members can upload and analyze files in the latest beta — from theverge.com by Wes Davis
ChatGPT Plus members can also use modes like Browse with Bing without manually switching, letting the chatbot decide when to use them.

OpenAI is rolling out new beta features for ChatGPT Plus members right now. Subscribers have reported that the update includes the ability to upload files and work with them, as well as multimodal support. Basically, users won’t have to select modes like Browse with Bing from the GPT-4 dropdown — it will instead guess what they want based on context.


Google agrees to invest up to $2 billion in OpenAI rival Anthropic — from reuters.com by Krystal Hu

Oct 27 (Reuters) – Alphabet’s (GOOGL.O) Google has agreed to invest up to $2 billion in the artificial intelligence company Anthropic, a spokesperson for the startup said on Friday.

The company has invested $500 million upfront into the OpenAI rival and agreed to add $1.5 billion more over time, the spokesperson said.

Google is already an investor in Anthropic, and the fresh investment would underscore a ramp-up in its efforts to better compete with Microsoft (MSFT.O), a major backer of ChatGPT creator OpenAI, as Big Tech companies race to infuse AI into their applications.


 

 

60+ Ideas for ChatGPT Assignments — from stars.library.ucf.edu by Kevin Yee, Kirby Whittington, Erin Doggette, and Laurie Uttich

60+ ideas for using ChatGPT in your assignments today


Artificial intelligence is disrupting higher education — from itweb.co.za by Rennie Naidoo; via GSV
Traditional contact universities need to adapt faster and find creative ways of exploring and exploiting AI, or lose their dominant position.

Higher education professionals have a responsibility to shape AI as a force for good.


Introducing Canva’s biggest education launch — from canva.com
We’re thrilled to unveil our biggest education product launch ever. Today, we’re introducing a whole new suite of products that turn Canva into the all-in-one classroom tool educators have been waiting for.

Also see Canva for Education.
Create and personalize lesson plans, infographics, posters, video, and more. 100% free for teachers and students at eligible schools.


ChatGPT and generative AI: 25 applications to support student engagement — from timeshighereducation.com by Seb Dianati and Suman Laudari
In the fourth part of their series looking at 100 ways to use ChatGPT in higher education, Seb Dianati and Suman Laudari share 25 prompts for the AI tool to boost student engagement


There are two ways to use ChatGPT — from theneurondaily.com

  1. Type to it.
  2. Talk to it (new).


Since then, we’ve looked to it for a variety of real-world business advice. For example, Prof Ethan Mollick posted a great guide to using ChatGPT-4 with voice as a negotiation instructor.

In a similar fashion, you can consult ChatGPT with voice for feedback on:

  • Job interviews.
  • Team meetings.
  • Business presentations.



Via The Rundown: Google is using AI to analyze the company’s Maps data and suggest adjustments to traffic light timing — aiming to cut driver waits, stops, and emissions.


Google Pixel’s face-altering photo tool sparks AI manipulation debate — from bbc.com by Darren Waters

The camera never lies. Except, of course, it does – and seemingly more often with each passing day.

In the age of the smartphone, digital edits on the fly to improve photos have become commonplace, from boosting colours to tweaking light levels.

Now, a new breed of smartphone tools powered by artificial intelligence (AI) is adding to the debate about what it means to photograph reality.

Google’s latest smartphones released last week, the Pixel 8 and Pixel 8 Pro, go a step further than devices from other companies. They are using AI to help alter people’s expressions in photographs.



From Digital Native to AI-Empowered: Learning in the Age of Artificial Intelligence — from campustechnology.com by Kim Round
The upcoming generation of learners will enter higher education empowered by AI. How can institutions best serve these learners and prepare them for the workplace of the future?

Dr. Chris Dede, of Harvard University and Co-PI of the National AI Institute for Adult Learning and Online Education, spoke about the differences between knowledge and wisdom in AI-human interactions in a keynote address at the 2022 Empowering Learners for the Age of AI conference. He drew a parallel between Star Trek: The Next Generation characters Data and Picard during complex problem-solving: While Data offers the knowledge and information, Captain Picard offers the wisdom and context from a leadership mantle, and determines its relevance, timing, and application.


The Near-term Impact of Generative AI on Education, in One Sentence — from opencontent.org by David Wiley

This “decreasing obstacles” framing turned out to be helpful in thinking about generative AI. When the time came, my answer to the panel question, “how would you summarize the impact generative AI is going to have on education?” was this:

“Generative AI greatly reduces the degree to which access to expertise is an obstacle to education.”

We haven’t even started to unpack the implications of this notion yet, but hopefully just naming it will give the conversation focus, give people something to disagree with, and help the conversation progress more quickly.


How to Make an AI-Generated Film — from heatherbcooper.substack.com by Heather Cooper
Plus, Midjourney finally has a new upscale tool!


Eureka! NVIDIA Research Breakthrough Puts New Spin on Robot Learning — from blogs.nvidia.com by Angie Lee
AI agent uses LLMs to automatically generate reward algorithms to train robots to accomplish complex tasks.

From DSC:
I’m not excited about this, as I can’t help but wonder…how long before the militaries of the world introduce this into their warfare schemes and strategies?


The 93 Questions Schools Should Ask About AI — from edweek.org by Alyson Klein

The toolkit recommends schools consider:

  • Purpose: How can AI help achieve educational goals?
  • Compliance: How does AI fit with existing policies?
  • Knowledge: How can schools advance AI Literacy?
  • Balance: What are the benefits and risks of AI?
  • Integrity: How does AI fit into policies on things like cheating?
  • Agency: How can humans stay in the loop on AI?
  • Evaluation: How can schools regularly assess the impact of AI?
 

Thinking with Colleagues: AI in Education — from campustechnology.com by Mary Grush
A Q&A with Ellen Wagner

Wagner herself recently relied on the power of collegial conversations to probe the question: What’s on the minds of educators as they make ready for the growing influence of AI in higher education? CT asked her for some takeaways from the process.

We are in the very early days of seeing how AI is going to affect education. Some of us are going to need to stay focused on the basic research to test hypotheses. Others are going to dive into laboratory “sandboxes” to see if we can build some new applications and tools for ourselves. Still others will continue to scan newsletters like ProductHunt every day to see what kinds of things people are working on. It’s going to be hard to keep up, to filter out the noise on our own. That’s one reason why thinking with colleagues is so very important.

Mary and Ellen linked to “What Is Top of Mind for Higher Education Leaders about AI?” — from northcoasteduvisory.com. Below are some excerpts from those notes:

We are interested in how K-12 education will change in terms of foundational learning. With in-class, active learning designs, will younger students do a lot more intensive building of foundational writing and critical thinking skills before they get to college?

  1. The Human in the Loop: AI is built using math: think of applied statistics on steroids. Humans will be needed more than ever to manage, review and evaluate the validity and reliability of results. Curation will be essential.
  2. We will need to generate ideas about how to address AI factors such as privacy, equity, bias, copyright, intellectual property, accessibility, and scalability.
  3. Have other institutions experimented with AI detection and/or held off on emerging tools in this space? We have just recently adjusted guidance and paused some tools related to this, given the massive inaccuracies in detection (and related downstream issues in faculty-elevated conduct cases).

Even though we learn repeatedly that innovation has a lot to do with effective project management and a solid message that helps people understand what they can do to implement change, people really need innovation to be more exciting and visionary than that.  This is the place where we all need to help each other stay the course of change. 


Along these lines, also see:


What people ask me most. Also, some answers. — from oneusefulthing.org by Ethan Mollick
A FAQ of sorts

I have been talking to a lot of people about Generative AI, from teachers to business executives to artists to people actually building LLMs. In these conversations, a few key questions and themes keep coming up over and over again. Many of those questions are more informed by viral news articles about AI than about the real thing, so I thought I would try to answer a few of the most common, to the best of my ability.

I can’t blame people for asking because, for whatever reason, the companies actually building and releasing Large Language Models often seem allergic to providing any sort of documentation or tutorial besides technical notes. I was given much better documentation for the generic garden hose I bought on Amazon than for the immensely powerful AI tools being released by the world’s largest companies. So, it is no surprise that rumor has been the way that people learn about AI capabilities.

Currently, there are only really three AIs to consider: (1) OpenAI’s GPT-4 (which you can get access to with a Plus subscription or via Microsoft Bing in creative mode, for free), (2) Google’s Bard (free), or (3) Anthropic’s Claude 2 (free, but paid mode gets you faster access). As of today, GPT-4 is the clear leader, Claude 2 is second best (but can handle longer documents), and Google trails, but that will likely change very soon when Google updates its model, which is rumored to be happening in the near future.

 

Everyday Media Literacy: An Analog Guide for Your Digital Life — from routledge.com by Sue Ellen Christian

In this second edition, award-winning educator Sue Ellen Christian offers students an accessible and informed guide to how they can consume and create media intentionally and critically.

The textbook applies media literacy principles and critical thinking to the key issues facing young adults today, from analyzing and creating media messages to verifying information and understanding online privacy. Through discussion prompts, writing exercises, key terms, and links, readers are provided with a framework from which to critically consume and create media in their everyday lives. This new edition includes updates covering privacy aspects of AI, VR and the metaverse, and a new chapter on digital audiences, gaming, and the creative and often unpaid labor of social media and influencers. Chapters examine news literacy, online activism, digital inequality, social media and identity, and global media corporations, giving readers a nuanced understanding of the key concepts at the core of media literacy. Concise, creative, and curated, this book highlights the cultural, political, and economic dynamics of media in contemporary society, and how consumers can mindfully navigate their daily media use.

This textbook is perfect for students and educators of media literacy, journalism, and education looking to build their understanding in an engaging way.

 

180 Degree Turn: NYC District Goes From Banning ChatGPT to Exploring AI’s Potential — from edweek.org by Alyson Klein (behind paywall)

New York City Public Schools will launch an Artificial Intelligence Policy Lab to guide the nation’s largest school district’s approach to this rapidly evolving technology.


The Leader’s Blindspot: How to Prepare for the Real Future — from preview.mailerlite.io by the AIEducator
The Commonly Held Belief: AI Will Automate Only Boring, Repetitive Tasks First

The Days of Task-Based Views on AI Are Numbered
The winds of change are sweeping across the educational landscape (emphasis DSC):

  1. Multifaceted AI: AI technologies are not one-trick ponies; they are evolving into complex systems that can handle a variety of tasks.
  2. Rising Expectations: As technology becomes integral to our lives, the expectations for personalised, efficient education are soaring.
  3. Skill Transformation: Future job markets will demand a different skill set, one that is symbiotic with AI capabilities.

Teaching: How to help students better understand generative AI — from chronicle.com by Beth McMurtrie
Beth describes ways professors have used ChatGPT to bolster critical thinking in writing-intensive courses

Kevin McCullen, an associate professor of computer science at the State University of New York at Plattsburgh, teaches a freshman seminar about AI and robotics. As part of the course, students read Machines of Loving Grace: The Quest for Common Ground Between Humans and Robots, by John Markoff. McCullen had the students work in groups to outline and summarize the first three chapters. Then he showed them what ChatGPT had produced in an outline.

“Their version and ChatGPT’s version seemed to be from two different books,” McCullen wrote. “ChatGPT’s version was essentially a ‘laundry list’ of events. Their version was narratives of what they found interesting. The students had focused on what the story was telling them, while ChatGPT focused on who did what in what year.” The chatbot also introduced false information, such as wrong chapter names.

The students, he wrote, found the writing “soulless.”


7 Questions with Dr. Cristi Ford, VP of Academic Affairs at D2L — from campustechnology.com by Rhea Kelly

In the Wild West of generative AI, educators and institutions are working out how best to use the technology for learning. How can institutions define AI guidelines that allow for experimentation while providing students with consistent guidance on appropriate use of AI tools?

To find out, we spoke with Dr. Cristi Ford, vice president of academic affairs at D2L. With more than two decades of educational experience in nonprofit, higher education, and K-12 institutions, Ford works with D2L’s institutional partners to elevate best practices in teaching, learning, and student support. Here, she shares her advice on setting and communicating AI policies that are consistent and future-ready.


AI Platform Built by Teachers, for Teachers, Class Companion Raises $4 Million to Tap Into the Power of Practice — from prweb.com

“If we want to use AI to improve education, we need more teachers at the table,” said Avery Pan, Class Companion co-founder and CEO. “Class Companion is designed by teachers, for teachers, to harness the most sophisticated AI and improve their classroom experience. Developing technologies specifically for teachers is imperative to supporting our next generation of students and education system.”


7 Questions on Generative AI in Learning Design — from campustechnology.com by Rhea Kelly
Open LMS Adoption and Education Specialist Michael Vaughn on the challenges and possibilities of using artificial intelligence to move teaching and learning forward.

The potential for artificial intelligence tools to speed up course design could be an attractive prospect for overworked faculty and spread-thin instructional designers. Generative AI can shine, for example, in tasks such as reworking assessment question sets, writing course outlines and learning objectives, and generating subtitles for audio and video clips. The key, says Michael Vaughn, adoption and education specialist at learning platform Open LMS, is treating AI like an intern who can be guided and molded along the way, and whose work is then vetted by a human expert.

We spoke with Vaughn about how best to utilize generative AI in learning design, ethical issues to consider, and how to formulate an institution-wide policy that can guide AI use today and in the future.


10 Ways Technology Leaders Can Step Up and Into the Generative AI Discussion in Higher Ed — from er.educause.edu by Lance Eaton and Stan Waddell

  1. Offer Short Primers on Generative AI
  2. Explain How to Get Started
  3. Suggest Best Practices for Engaging with Generative AI
  4. Give Recommendations for Different Groups
  5. Recommend Tools
  6. Explain the Closed vs. Open-Source Divide
  7. Avoid Pitfalls
  8. Conduct Workshops and Events
  9. Spot the Fake
  10. Provide Proper Guidance on the Limitations of AI Detectors


 


The next phase of digital whiteboarding for Google Workspace — from workspaceupdates.googleblog.com

What’s changing

In late 2024, we will wind down the Jamboard whiteboarding app as well as continue with the previously planned end of support for Google Jamboard devices. For those who are impacted by this change, we are committed to helping you transition:

    • We are integrating whiteboard tools such as FigJam, Lucidspark, and Miro across Google Workspace so you can include them when collaborating in Meet, sharing content in Drive, or scheduling in Calendar.

The Teacher’s Guide for Transitioning from Jamboard to FigJam — from tommullaney.com by Tom Mullaney


 
© 2025 | Daniel Christian