Khan Academy and Microsoft partner to expand access to AI tools that personalize teaching and help make learning fun — from news.microsoft.com

[On 5/21/24] at Microsoft Build, Microsoft and Khan Academy announced a new partnership that aims to bring these time-saving and lesson-enhancing AI tools to millions of educators. By donating access to Azure AI-optimized infrastructure, Microsoft is enabling Khan Academy to offer all K-12 educators in the U.S. free access to the pilot of Khanmigo for Teachers, which will now be powered by Azure OpenAI Service.

The two companies will also collaborate to explore opportunities to improve AI tools for math tutoring in an affordable, scalable and adaptable way with a new version of Phi-3, a family of small language models (SLMs) developed by Microsoft.

 

Also see/referenced:

Khanmigo -- a free, AI-powered teaching assistant


Also relevant/see:

Khan Academy and Microsoft are teaming up to give teachers a free AI assistant — from fastcompany.com by Steven Melendez
AI assistant Khanmigo can help time-strapped teachers come up with lesson ideas and test questions, the companies say.

Khan Academy’s AI assistant, Khanmigo, has earned praise for helping students to understand and practice everything from math to English, but it can also help teachers devise lesson plans, formulate questions about assigned readings, and even generate reading passages appropriate for students at different levels. More than just a chatbot, the software offers specific AI-powered tools for generating quizzes and assignment instructions, drafting lesson plans, and formulating letters of recommendation.

Having a virtual teaching assistant is especially valuable in light of recent research from the RAND Corporation, which found that teachers work longer hours than most working adults, including administrative and prep work outside the classroom.

 

Introducing Copilot+ PCs — from blogs.microsoft.com

[On May 20th], at a special event on our new Microsoft campus, we introduced the world to a new category of Windows PCs designed for AI, Copilot+ PCs.

Copilot+ PCs are the fastest, most intelligent Windows PCs ever built. With powerful new silicon capable of an incredible 40+ TOPS (trillion operations per second), all-day battery life and access to the most advanced AI models, Copilot+ PCs will enable you to do things you can’t on any other PC. Easily find and remember what you have seen in your PC with Recall, generate and refine AI images in near real-time directly on the device using Cocreator, and bridge language barriers with Live Captions, translating audio from 40+ languages into English.

From DSC:
At first glance, Recall could be fraught with security- and privacy-related issues. But what do I know? The Neuron states “Microsoft assures that everything Recall sees remains private.” Ok…


From The Rundown AI concerning the above announcements:

The details:

  • A new system enables Copilot+ PCs to run AI workloads up to 20x faster and 100x more efficiently than traditional PCs.
  • Windows 11 has been rearchitected specifically for AI, integrating the Copilot assistant directly into the OS.
  • New AI experiences include Recall, a feature that allows users to search for anything they’ve seen on their screen with natural language.
  • Copilot’s new screen-sharing feature allows AI to watch, hear, and understand what a user is doing on their computer and answer questions in real-time.
  • Copilot+ PCs will start at $999, and ship with OpenAI’s latest GPT-4o models.

Why it matters: Tony Stark’s all-powerful JARVIS AI assistant is getting closer to reality every day. Once Copilot, ChatGPT, Project Astra, or anyone else can not only respond but start executing tasks autonomously, things will start getting really exciting — and likely initiate a whole new era of tech work.


 


Microsoft’s new ChatGPT competitor… — from The Rundown AI

The Rundown: Microsoft is reportedly developing a massive 500B parameter in-house LLM called MAI-1, aiming to compete with top AI models from OpenAI, Anthropic, and Google.


2024 | The AI Founder Report | Business Impact, Use cases, & Tools — from Hampton; via The Neuron

Hampton runs a private community for high-growth tech founders and CEOs. We asked our community of founders and owners how AI has impacted their business and what tools they use.

Here’s a sneak peek of what’s inside:

  • The budgets they set aside for AI research and development
  • The most common (and obscure) tools founders are using
  • Measurable business impacts founders have seen through using AI
  • Where they are purposefully not using AI and much more

2024 Work Trend Index Annual Report from Microsoft and LinkedIn
AI at Work Is Here. Now Comes the Hard Part
Employees want AI, leaders are looking for a path forward.

Also relevant, see Microsoft’s web page on this effort:

To help leaders and organizations overcome AI inertia, Microsoft and LinkedIn looked at how AI will reshape work and the labor market broadly, surveying 31,000 people across 31 countries, identifying labor and hiring trends from LinkedIn, and analyzing trillions of Microsoft 365 productivity signals as well as research with Fortune 500 customers. The data points to insights every leader and professional needs to know—and actions they can take—when it comes to AI’s implications for work.

 

The Verge | What’s Next With AI | February 2024 | Consumer Survey

Microsoft AI creates talking deepfakes from single photo — from inavateonthenet.net


The Great Hall – where now with AI? It is not ‘Human Connection V Innovative Technology’ but ‘Human Connection + Innovative Technology’ — from donaldclarkplanb.blogspot.com by Donald Clark

The theme of the day was Human Connection V Innovative Technology. I see this a lot at conferences, setting up the human connection (social) against the machine (AI). I think this is ALL wrong. It is, and has always been a dialectic, human connection (social) PLUS the machine. Everyone had a smartphone, most use it for work, comms and social media. The binary between human and tech has long disappeared. 


Techno-Social Engineering: Why the Future May Not Be Human, TikTok’s Powerful ForYou Algorithm, & More — by Misha Da Vinci

Things to consider as you dive into this edition:

  • As we increasingly depend on technology, how is it changing us?
  • In the interaction between humans and technology, who is adapting to whom?
  • Is the technology being built for humans, or are we being changed to fit into tech systems?
  • As time passes, will we become more like robots or the AI models we use?
  • Over the next 30 years, as we increasingly interact with technology, who or what will we become?

 

Description:

I recently created an AI version of myself—REID AI—and recorded a Q&A to see how this digital twin might challenge me in new ways. The video avatar is generated by Hour One, its voice was created by Eleven Labs, and its persona—the way that REID AI formulates responses—is generated from a custom chatbot built on GPT-4 that was trained on my books, speeches, podcasts and other content that I’ve produced over the last few decades. I decided to interview it to test its capability and how closely its responses match—and test—my thinking. Then, REID AI asked me some questions on AI and technology. I thought I would hate this, but I’ve actually ended up finding the whole experience interesting and thought-provoking.


From DSC:
This ability to ask questions of a digital twin is very interesting when you think about it in terms of “interviewing” a historical figure. I believe character.ai provides this kind of thing, but I haven’t used it much.
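
For those curious what sits under the hood of a digital twin like REID AI or a character.ai-style historical figure, here is a minimal sketch of one common approach: retrieve a few passages from the person’s own writing and place them in a system prompt so the model answers in that voice. Everything in it (the corpus, the model name, the naive keyword retrieval) is an illustrative assumption; it is not the actual Hour One / Eleven Labs / REID AI pipeline.

```python
# Minimal persona-chatbot sketch. Assumptions throughout: the corpus excerpts,
# model name, and keyword-overlap retrieval are placeholders for illustration.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
MODEL = "gpt-4o-mini"  # hypothetical model choice

# In a real system this would be an embedding index over books, speeches,
# and podcast transcripts; here it is just a short list of excerpts.
CORPUS = [
    "Excerpt 1 from the person's published writing...",
    "Excerpt 2 from a speech transcript...",
    "Excerpt 3 from a podcast appearance...",
]

def retrieve(question: str, k: int = 2) -> list[str]:
    # Naive relevance score: count words the question and excerpt share.
    q_words = set(question.lower().split())
    ranked = sorted(CORPUS,
                    key=lambda p: len(q_words & set(p.lower().split())),
                    reverse=True)
    return ranked[:k]

def ask_persona(question: str) -> str:
    # Build a system prompt that grounds the answer in the retrieved excerpts.
    context = "\n\n".join(retrieve(question))
    system = (
        "You are a conversational stand-in for a specific author. "
        "Answer in their voice, grounding every claim in the excerpts below. "
        "If the excerpts don't cover the question, say so.\n\n" + context
    )
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "system", "content": system},
                  {"role": "user", "content": question}],
    )
    return resp.choices[0].message.content

print(ask_persona("What do you think AI will change about work?"))
```

A production system would swap the keyword overlap for an embedding index over the full body of work, and layer the video avatar and cloned voice on top of the text responses.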


 

AI RESOURCES AND TEACHING (Kent State University) — from aiadvisoryboards.wordpress.com

AI Resources and Teaching | Kent State University offers valuable resources for educators interested in incorporating artificial intelligence (AI) into their teaching practices. The university recognizes that the rapid emergence of AI tools presents both challenges and opportunities in higher education.

The AI Resources and Teaching page provides educators with information and guidance on various AI tools and their responsible use within and beyond the classroom. The page covers different areas of AI application, including language generation, visuals, videos, music, information extraction, quantitative analysis, and AI syllabus language examples.


A Cautionary AI Tale: Why IBM’s Dazzling Watson Supercomputer Made a Lousy Tutor — from the74million.org by Greg Toppo
With a new race underway to create the next teaching chatbot, IBM’s abandoned 5-year, $100M ed push offers lessons about AI’s promise and its limits.

For all its jaw-dropping power, Watson the computer overlord was a weak teacher. It couldn’t engage or motivate kids, inspire them to reach new heights or even keep them focused on the material — all qualities of the best mentors.

It’s a finding with some resonance to our current moment of AI-inspired doomscrolling about the future of humanity in a world of ascendant machines. “There are some things AI is actually very good for,” Nitta said, “but it’s not great as a replacement for humans.”

His five-year journey to essentially a dead-end could also prove instructive as ChatGPT and other programs like it fuel a renewed, multimillion-dollar experiment to, in essence, prove him wrong.

To be sure, AI can do sophisticated things such as generating quizzes from a class reading and editing student writing. But the idea that a machine or a chatbot can actually teach as a human can, he said, represents “a profound misunderstanding of what AI is actually capable of.” 

Nitta, who still holds deep respect for the Watson lab, admits, “We missed something important. At the heart of education, at the heart of any learning, is engagement. And that’s kind of the Holy Grail.”

From DSC:
This is why the vision that I’ve been tracking and working on has always said that HUMAN BEINGS will be necessary — they are key to realizing this vision. Along these lines, here’s a relevant quote:

Another crucial component of a new learning theory for the age of AI would be the cultivation of “blended intelligence.” This concept recognizes that the future of learning and work will involve the seamless integration of human and machine capabilities, and that learners must develop the skills and strategies needed to effectively collaborate with AI systems. Rather than viewing AI as a threat to human intelligence, a blended intelligence approach seeks to harness the complementary strengths of humans and machines, creating a symbiotic relationship that enhances the potential of both.

Per Alexander “Sasha” Sidorkin, Head of the National Institute on AI in Society at California State University Sacramento.

 

Amid explosive demand, America is running out of power — from washingtonpost.com by Evan Halper
AI and the boom in clean-tech manufacturing are pushing America’s power grid to the brink. Utilities can’t keep up.

Vast swaths of the United States are at risk of running short of power as electricity-hungry data centers and clean-technology factories proliferate around the country, leaving utilities and regulators grasping for credible plans to expand the nation’s creaking power grid.

A major factor behind the skyrocketing demand is the rapid innovation in artificial intelligence, which is driving the construction of large warehouses of computing infrastructure that require exponentially more power than traditional data centers. AI is also part of a huge scale-up of cloud computing. Tech firms like Amazon, Apple, Google, Meta and Microsoft are scouring the nation for sites for new data centers, and many lesser-known firms are also on the hunt.


The Obscene Energy Demands of A.I. — from newyorker.com by Elizabeth Kolbert
How can the world reach net zero if it keeps inventing new ways to consume energy?

“There’s a fundamental mismatch between this technology and environmental sustainability,” de Vries said. Recently, the world’s most prominent A.I. cheerleader, Sam Altman, the C.E.O. of OpenAI, voiced similar concerns, albeit with a different spin. “I think we still don’t appreciate the energy needs of this technology,” Altman said at a public appearance in Davos. He didn’t see how these needs could be met, he went on, “without a breakthrough.” He added, “We need fusion or we need, like, radically cheaper solar plus storage, or something, at massive scale—like, a scale that no one is really planning for.”


A generative AI reset: Rewiring to turn potential into value in 2024 — from mckinsey.com by Eric Lamarre, Alex Singla, Alexander Sukharevsky, and Rodney Zemmel; via Philippa Hardman
The generative AI payoff may only come when companies do deeper organizational surgery on their business.

  • Figure out where gen AI copilots can give you a real competitive advantage
  • Upskill the talent you have but be clear about the gen-AI-specific skills you need
  • Form a centralized team to establish standards that enable responsible scaling
  • Set up the technology architecture to scale
  • Ensure data quality and focus on unstructured data to fuel your models
  • Build trust and reusability to drive adoption and scale

AI Prompt Engineering Is Dead — from spectrum.ieee.org
Long live AI prompt engineering

Since ChatGPT dropped in the fall of 2022, everyone and their donkey has tried their hand at prompt engineering—finding a clever way to phrase your query to a large language model (LLM) or AI art or video generator to get the best results or sidestep protections. The Internet is replete with prompt-engineering guides, cheat sheets, and advice threads to help you get the most out of an LLM.

However, new research suggests that prompt engineering is best done by the model itself, and not by a human engineer. This has cast doubt on prompt engineering’s future—and increased suspicions that a fair portion of prompt-engineering jobs may be a passing fad, at least as the field is currently imagined.
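
To make that finding concrete, here is a minimal sketch of what letting the model do the prompt engineering can look like: an LLM proposes candidate rewrites of a seed prompt, each candidate is scored against a tiny labeled eval set, and the highest scorer wins. The openai client calls are standard, but the model name, eval examples, and helper functions are assumptions for illustration, not anything taken from the IEEE Spectrum piece.

```python
# Minimal sketch of model-driven prompt optimization. Assumptions: model name,
# toy eval set, and the scoring task are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
MODEL = "gpt-4o-mini"  # hypothetical model choice

# A tiny labeled eval set: (input text, label expected to appear in the answer).
EVAL_SET = [
    ("The movie was a waste of two hours.", "negative"),
    ("I would happily watch this again.", "positive"),
]

SEED_PROMPT = "Classify the sentiment of the review as positive or negative."

def ask(system_prompt: str, user_text: str) -> str:
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "system", "content": system_prompt},
                  {"role": "user", "content": user_text}],
    )
    return resp.choices[0].message.content.lower()

def score(prompt: str) -> float:
    # Fraction of eval examples whose expected label appears in the answer.
    hits = sum(expected in ask(prompt, text) for text, expected in EVAL_SET)
    return hits / len(EVAL_SET)

def propose_rewrites(prompt: str, n: int = 3) -> list[str]:
    # Ask the model itself to generate improved variants of the prompt.
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user",
                   "content": f"Rewrite this instruction {n} different ways to "
                              f"make it clearer and more reliable. One per line:\n"
                              f"{prompt}"}],
    )
    return [line.strip("- ").strip()
            for line in resp.choices[0].message.content.splitlines()
            if line.strip()]

candidates = [SEED_PROMPT] + propose_rewrites(SEED_PROMPT)
best = max(candidates, key=score)
print("Best-scoring prompt:", best)
```

The loop is simply generate, score, select; the research the article points to automates variations of this loop at much larger scale than a human could try by hand.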


What the birth of the spreadsheet teaches us about generative AI — from timharford.com by Tim Harford; via Sam DeBrule

There is one very clear parallel between the digital spreadsheet and generative AI: both are computer apps that collapse time. A task that might have taken hours or days can suddenly be completed in seconds. So accept for a moment the premise that the digital spreadsheet has something to teach us about generative AI. What lessons should we absorb?

It’s that pace of change that gives me pause. Ethan Mollick, author of the forthcoming book Co-Intelligence, tells me “if progress on generative AI stops now, the spreadsheet is not a bad analogy”. We’d get some dramatic shifts in the workplace, a technology that broadly empowers workers and creates good new jobs, and everything would be fine. But is it going to stop any time soon? Mollick doubts that, and so do I.


 

 

Text to video via OpenAI’s Sora. (I had taken this screenshot on the 15th, but am posting it now.)

We’re teaching AI to understand and simulate the physical world in motion, with the goal of training models that help people solve problems that require real-world interaction.

Introducing Sora, our text-to-video model. Sora can generate videos up to a minute long while maintaining visual quality and adherence to the user’s prompt.

Along these lines, also see:

Pika; via Superhuman AI



An Ivy League school just announced its first AI degree — from qz.com by Michelle Cheng; via Barbara Anna Zielonka on LinkedIn
It’s a sign of the times. At the same time, AI talent is scarce.

At the University of Pennsylvania, undergraduate students in its school of engineering will soon be able to study for a bachelor of science degree in artificial intelligence.

What can one do with an AI degree? The University of Pennsylvania says students will be able to apply the skills they learn in school to build responsible AI tools, develop materials for emerging chips and hardware, and create AI-driven breakthroughs in healthcare through new antibiotics, among other things.



Google Pumps $27 Million Into AI Training After Microsoft Pledge—Here’s What To Know — from forbes.com by Robert Hart

Google on Monday announced plans to help train people in Europe with skills in artificial intelligence, the latest tech giant to invest in preparing workers and economies amid the disruption brought on by technologies they are racing to develop.


The Exhausting Pace of AI: Google’s Ultra Leap — from marcwatkins.substack.com by Marc Watkins

The acceleration of AI deployments has gotten so absurdly out of hand that a draft post I started a week ago about a new development is now out of date.

The Pace is Out of Control
A mere week since Ultra 1.0’s announcement, Google has now introduced us to Gemini 1.5, a model they are clearly positioning to be the leader in the field. Here is the full technical report for Gemini 1.5, and what it can do is stunning.

 

 

 


Maryville Announces $21 Million Investment in AI and New Technologies Amidst Record Growth — from maryville.edu; via Arthur “Art” Fredrich on LinkedIn

[St. Louis, MO, February 14, 2024] – In a bold move that counters the conventions of more traditional schools, Maryville University has unveiled a substantial $21 million multi-year investment in artificial intelligence (AI) and cutting-edge technologies. This groundbreaking initiative is set to transform the higher education experience to be powered by the latest technology to support student success and a five-star experience for thousands of students both on-campus and online.

 

 

Conversational & Experiential: The New Duality of Learning — from learningguild.com by Markus Bernhardt

Excerpt (emphasis DSC):

The future of corporate learning and development (L&D) is being profoundly reshaped by the progress we are witnessing in artificial intelligence (AI). The increasing availability of new technologies and tools is causing L&D leaders and their teams to rethink their strategy and processes, and even their team structure. The resulting shift, already gaining momentum, will soon move us toward a future where learning experiences are deeply personal, interactive, and contextually rich.

The technological advancements at the forefront of this revolution:

  • Allow us to create high-quality content faster and at a fraction of the cost previously experienced.
  • Provide us with a range of new modalities of delivery, such as chat interfaces, as well as immersive and experiential simulations and games.
  • Enable us to transform learning and training more and more into a journey uniquely tailored to each individual’s learning path, strengths, weaknesses, and confidence levels.

We are already seeing signs of the immediate future—one where AI will adapt not only content but the entire learner experience, on-the-fly and aligned with the needs and requirements of the learner at a specific moment of need.


Harnessing AI in L&D: Reviewing 2023 & Imagining the Future — from learningguild.com by Juan Naranjo

Excerpt (emphasis DSC):

AI-assisted design & development work: A dramatic shift
This prediction was right. There has been a seismic shift in instructional design, and the role is evolving toward content curation, editing, and resource orchestration. Critical thinking skills are becoming more important than ever to make sure that the final learning asset is accurate. All of this is happening thanks to AI tools like:

  • Adobe Firefly…
  • ChatGPT…
  • Another tool, one that isn’t usually part of the L&D ecosystem, is Microsoft’s Azure AI Services…

Early estimates indicate these improvements save between 30 percent and 60 percent of development time.

As a reminder, meta-learning, in this context, refers to tools that serve up experiences to learners based on their preferences, needs, and goals. It is the superstructure behind the content assets (e.g., programs, courses, articles, videos, etc.) that assembles everything into a coherent, and purposeful, body of knowledge for the users.

 

Your guide to AI: February 2024 — from nathanbenaich.substack.com by Nathan Benaich & Alex Chalmers

Areas covered include:

  • Policy
  • The (geo)politics of AI
  • Hardware
  • Big tech start-ups
  • Research
  • Startups
  • Exits




Text-to-Video with Google’s Lumiere



Amazon announces Rufus, a new generative AI-powered conversational shopping experience — from aboutamazon.com by Rajiv Mehta

Rufus is an expert shopping assistant trained on Amazon’s product catalog and information from across the web to answer customer questions on shopping needs, products, and comparisons, make recommendations based on this context, and facilitate product discovery in the same Amazon shopping experience customers use regularly.

Launching [2/1/24] in beta to a small subset of customers in Amazon’s mobile app, Rufus will progressively roll out to additional U.S. customers in the coming weeks.

 

Unlocking productivity and personalizing learning with AI — from educationblog.microsoft.com by Microsoft Education Team

Today, we’re announcing the next wave of AI innovations from Microsoft Education that will help unlock productivity and personalize learning. This includes expanded Copilot for Microsoft 365 availability and Loop coming to education. We’re also sharing news about AI built for education such as Reading Coach and features designed to free up time for educators and personalize learning. As part of our continued work to build AI literacy, we’ve launched our latest course for educators and a new learning path on Microsoft Learn. And earlier this week we outlined Microsoft’s position and themes for policymakers to consider around advancing youth online safety and wellness.

With the latest AI technology, we have an opportunity to provide learners with personalized, engaging, and transformative reading experiences. Reading Coach, a Learning Accelerator now powered by generative AI, does just that. You can sign up for a preview of Reading Coach today and try it for yourself at coach.microsoft.com.


Recap: Winter AI Institute for Teachers — from umcetl.substack.com

Last week, CETL partnered with the Department of Writing and Rhetoric to offer a second iteration of the AI Institute for Teachers to an audience of UM instructors from across disciplines. Nearly 60 faculty from 26 different departments and schools attended the three-day event. In a wide variety of interactive sessions designed by Institute leader Marc Watkins, participants examined the impact of generative AI on teaching and learning, working in small groups to consider how to approach AI in their own disciplines.

If you’re not a UM faculty member or couldn’t attend the sessions, we have good news! All the materials from the Institute are publicly available at the following links:

And we’ve written a short recap of the Institute here.



Learn with AI — from the University of Maine

Rather than try to ban this technology from classrooms outright, the Learning With AI project asks if this moment offers an opportunity to introduce students to the ethical and economic questions wrought by these new tools, as well as to experiment with progressive forms of pedagogy that can exploit them.

 


Learners’ Edition: AI-powered Coaching, Professional Certifications + Inspiring conversations about mastering your learning & speaking skills — from linkedin.com by Tomer Cohen

Excerpts:

1. Your own AI-powered coaching
Learners can go into LinkedIn Learning and ask a question or explain a challenge they are currently facing at work (we’re focusing on areas within Leadership and Management to start). AI-powered coaching will pull from the collective knowledge of our expansive LinkedIn Learning library and, instantaneously, offer advice, examples, or feedback that is personalized to the learner’s skills, job, and career goals.

What makes us so excited about this launch is we can now take everything we as LinkedIn know about people’s careers and how they navigate them and help accelerate them with AI.

3. Learn exactly what you need to know for your next job
When looking for a new job, it’s often the time we think about refreshing our LinkedIn profiles. It’s also a time we can refresh our skills. And with skill sets for jobs having changed by 25% since 2015 – with the number expected to increase by 65% by 2030 – keeping our skills a step ahead is one of the most important things we can do to stand out.

There are a couple of ways we’re making it easier to learn exactly what you need to know for your next job:

When you set a job alert, in addition to being notified about open jobs, we’ll recommend learning courses and Professional Certificate offerings to help you build the skills needed for that role.

When you view a job, we recommend specific courses to help you build the required skills. If you have LinkedIn Learning access through your company or as part of a Premium subscription, you can follow the skills for the job, that way we can let you know when we launch new courses for those skills and recommend you content on LinkedIn that better aligns to your career goals.


2024 Edtech Predictions from Edtech Insiders — from edtechinsiders.substack.com by Alex Sarlin, Ben Kornell, and Sarah Morin
Omni-modal AI, edtech funding prospects, higher ed wake up calls, focus on career training, and more!

Alex: I talked to the 360 Learning folks at one point and they had this really interesting epiphany, which is basically that it’s been almost impossible for every individual company in the past to create a hierarchy of skills and a hierarchy of positions and actually organize what it looks like for people to move around and upskill within the company and get to new paths.

Until now. AI actually can do this very well. It can take not only job description data, but it can take actual performance data. It can actually look at what people do on a daily basis and back fit that to training, create automatic training based on it.

From DSC:
I appreciated how they addressed K-12, higher ed, and the workforce all in one posting. Nice work. We don’t need siloes. We need more overall design thinking re: our learning ecosystems — as well as more collaborations. We need more on-ramps and pathways in a person’s learning/career journey.

 

The biggest things that happened in AI this year — from superhuman.ai by Zain Kahn

January:

  • Microsoft raises eyebrows with a huge $10 Billion investment in OpenAI.

February:

  • Meta launches LLaMA, their open-source rival to OpenAI’s models.
  • OpenAI announces ChatGPT Plus, a paid version of their chatbot.
  • Microsoft announces a new AI-powered Bing Search.

March:

  • OpenAI announces the powerful GPT-4 model, still considered to be the gold standard.
  • Midjourney releases V5, which brings AI-powered image generation one step closer to photorealism.
  • Microsoft launches Copilot for Microsoft 365.
  • Google launches Bard, its rival to ChatGPT.

…and more


AI 2023: A Year in Review — from stefanbauschard.substack.com by Stefan Bauschard
2023 developments in AI and a hint of what they are building toward

Some of the items that Stefan includes in his posting include:

  • ChatGPT and other language models that generate text.
  • Image generators.
  • Video generators.
  • AI models that can read, hear, and speak.
  • AI models that can see.
  • Improving models.
  • “Multimodal” models.
  • Training on specific content.
  • Reasoning & planning.
  • …and several others

The Dictionary.com Word of the Year is “hallucinate.” — from content.dictionary.com by Nick Norlen and Grant Barrett; via The Rundown AI

hallucinate
[ huh-loo-suh-neyt ]

verb
(of artificial intelligence) to produce false information contrary to the intent of the user and present it as if true and factual. Example: When chatbots hallucinate, the result is often not just inaccurate but completely fabricated.


Soon, every employee will be both AI builder and AI consumer — from zdnet.com by Joe McKendrick, via Robert Gibson on LinkedIn
“Standardized tools and platforms as well as advanced low- or no-code tech may enable all employees to become low-level engineers,” suggests a recent report.

The time could be ripe for a blurring of the lines between developers and end-users, a recent report out of Deloitte suggests. It makes more business sense to focus on bringing in citizen developers for ground-level programming, versus seeking superstar software engineers, the report’s authors argue, or — as they put it — “instead of transforming from a 1x to a 10x engineer, employees outside the tech division could be going from zero to one.”

Along these lines, see:

  • TECH TRENDS 2024 — from deloitte.com
    Six emerging technology trends demonstrate that in an age of generative machines, it’s more important than ever for organizations to maintain an integrated business strategy, a solid technology foundation, and a creative workforce.

UK Supreme Court rules AI is not an inventor — from theverge.com by Emilia David

The ruling follows a similar decision denying patent registrations naming AI as creators.

The UK Supreme Court ruled that AI cannot get patents, declaring it cannot be named as an inventor of new products because the law considers only humans or companies to be creators.


The Times Sues OpenAI and Microsoft Over A.I. Use of Copyrighted Work — from nytimes.com by Michael M. Grynbaum and Ryan Mac

The New York Times sued OpenAI and Microsoft for copyright infringement on Wednesday, opening a new front in the increasingly intense legal battle over the unauthorized use of published work to train artificial intelligence technologies.

The suit does not include an exact monetary demand. But it says the defendants should be held responsible for “billions of dollars in statutory and actual damages” related to the “unlawful copying and use of The Times’s uniquely valuable works.” It also calls for the companies to destroy any chatbot models and training data that use copyrighted material from The Times.

On this same topic, also see:


Apple’s iPhone Design Chief Enlisted by Jony Ive, Sam Altman to Work on AI Devices — from bloomberg.com by Mark Gurman (behind paywall)

  • Design executive Tang Tan is set to leave Apple in February
  • Tan will join Ive’s LoveFrom design studio, work on AI project

AI 2023: Chatbots Spark New Tools — from heatherbcooper.substack.com by Heather Cooper

ChatGPT and Other Chatbots
The arrival of ChatGPT sparked tons of new AI tools and changed the way we thought about using a chatbot in our daily lives.

Chatbots like ChatGPT, Perplexity, Claude, and Bing Chat can help content creators by quickly generating ideas, outlines, drafts, and full pieces of content, allowing creators to produce more high-quality content in less time.

These AI tools boost efficiency and creativity in content production across formats like blog posts, social captions, newsletters, and more.


Microsoft’s next Surface laptops will reportedly be its first true ‘AI PCs’ — from theverge.com by Emma Roth
Next year’s Surface Laptop 6 and Surface Pro 10 will feature Arm and Intel options, according to Windows Central.

Microsoft is getting ready to upgrade its Surface lineup with new AI-enabled features, according to a report from Windows Central. Unnamed sources told the outlet the upcoming Surface Pro 10 and Surface Laptop 6 will come with a next-gen neural processing unit (NPU), along with Intel and Arm-based options.


How one of the world’s oldest newspapers is using AI to reinvent journalism — from theguardian.com by Alexandra Topping
Berrow’s Worcester Journal is one of several papers owned by the UK’s second biggest regional news publisher to hire ‘AI-assisted’ reporters

With the AI-assisted reporter churning out bread and butter content, other reporters in the newsroom are freed up to go to court, meet a councillor for a coffee or attend a village fete, says the Worcester News editor, Stephanie Preece.

“AI can’t be at the scene of a crash, in court, in a council meeting, it can’t visit a grieving family or look somebody in the eye and tell that they’re lying. All it does is free up the reporters to do more of that,” she says. “Instead of shying away from it, or being scared of it, we are saying AI is here to stay – so how can we harness it?”



What to Expect in AI in 2024 — from hai.stanford.edu
Seven Stanford HAI faculty and fellows predict the biggest stories for next year in artificial intelligence.

Topics include:

  • White Collar Work Shifts
  • Deepfake Proliferation
  • GPU Shortage
  • More Helpful Agents
  • Hopes for U.S. Regulation
  • Asking Big Questions, Applying New Policies
  • Companies Will Navigate Complicated Regulations

Addendum on 1/2/24:


 

Microsoft New Future of Work Report 2023 — from microsoft.com by various authors; via Stefan Bauschard

Throughout 2023, AI and the future of work have frequently been on the metaphorical – and often literal – front page around the world. There have been many excellent articles about the ways in which work may change as LLMs are increasingly integrated into our lives. As such, in this year’s report we focus specifically on areas that we think deserve additional attention or where there is research that has been done at Microsoft that offers a unique perspective. This is a report that should be read as a complement to the existing literature, rather than as a synthesis of all of it.

This is a rare time, one in which research will play a particularly important role in defining what the future of work looks like. At this special moment, scientists can’t just be passive observers of what is happening. Rather, we have the responsibility to shape work for the better. We hope this report can help our colleagues around the world make progress towards this goal.

Microsoft New Future of Work Report 2023

Excerpt:

Analyzing and integrating may become more important skills than searching and creating 
With content being generated by AI, knowledge work may shift towards more analysis and critical integration

  • Information search as well as content production (manually typing, writing code, designing images) is greatly enhanced by AI, so general information work may shift to integrating and critically analyzing retrieved information
  • Writing with AI is shown to increase the amount of text produced as well as to increase writing efficiency (Biermann et al. 2022, Lee et al 2022)
  • With more generated text available, the skills of research, conceptualization, planning, prompting and editing may take on more importance as LLMs do the first round of production (e.g., Mollick 2023).
  • Skills not directly related to content production, such as leading, dealing with critical social situations, navigating interpersonal trust issues, and demonstrating emotional intelligence, may all be more valued in the workplace (LinkedIn 2023)
 

The rise of AI fake news is creating a ‘misinformation superspreader’ — from washingtonpost.com by Pranshu Verma
AI is making it easy for anyone to create propaganda outlets, producing content that can be hard to differentiate from real news

Artificial intelligence is automating the creation of fake news, spurring an explosion of web content mimicking factual articles that instead disseminates false information about elections, wars and natural disasters.

Since May, websites hosting AI-created false articles have increased by more than 1,000 percent, ballooning from 49 sites to more than 600, according to NewsGuard, an organization that tracks misinformation.

Historically, propaganda operations have relied on armies of low-paid workers or highly coordinated intelligence organizations to build sites that appear to be legitimate. But AI is making it easy for nearly anyone — whether they are part of a spy agency or just a teenager in their basement — to create these outlets, producing content that is at times hard to differentiate from real news.


AI, and everything else — from pitch.com by Benedict Evans


Chevy Chatbots Go Rogue
How a customer service chatbot made a splash on social media; write your holiday cards with AI

Their AI chatbot, designed to assist customers in their vehicle search, became a social media sensation for all the wrong reasons. One user even convinced the chatbot to agree to sell a 2024 Chevy Tahoe for just one dollar!

This story is exactly why AI implementation needs to be approached strategically. Learning to use AI also means learning to think through the guardrails and boundaries.

Here are our tips.


Rite Aid used facial recognition on shoppers, fueling harassment, FTC says — from washingtonpost.com by Drew Harwell
A landmark settlement over the pharmacy chain’s use of the surveillance technology could raise further doubts about facial recognition’s use in stores, airports and other venues

The pharmacy chain Rite Aid misused facial recognition technology in a way that subjected shoppers to unfair searches and humiliation, the Federal Trade Commission said Tuesday, part of a landmark settlement that could raise questions about the technology’s use in stores, airports and other venues nationwide.

But the chain’s “reckless” failure to adopt safeguards, coupled with the technology’s long history of inaccurate matches and racial biases, ultimately led store employees to falsely accuse shoppers of theft, leading to “embarrassment, harassment, and other harm” in front of their family members, co-workers and friends, the FTC said in a statement.


 
© 2024 | Daniel Christian