Also relevant/see:
Radar Trends to Watch: May 2023 Developments in Programming, Security, Web, and More — from oreilly.com by Mike Loukides
Excerpt:
Large language models continue to colonize the technology landscape. They’ve broken out of the AI category, and now are showing up in security, programming, and even the web. That’s a natural progression, and not something we should be afraid of: they’re not coming for our jobs. But they are remaking the technology industry.
One part of this remaking is the proliferation of “small” large language models. We’ve noted the appearance of llama.cpp, Alpaca, Vicuna, Dolly 2.0, Koala, and a few others. But that’s just the tip of the iceberg. Small LLMs are appearing every day, and some will even run in a web browser. This trend promises to be even more important than the rise of the “large” LLMs, like GPT-4. Only a few organizations can build, train, and run the large LLMs. But almost anyone can train a small LLM that will run on a well-equipped laptop or desktop.
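To make the "small LLM on a laptop" idea concrete, here is a rough sketch (not from the article) of loading one of the open models mentioned above with the Hugging Face transformers library. The checkpoint name, hardware assumptions, and prompt are illustrative only.

```python
# Rough sketch, assuming the `transformers` and `torch` packages are installed
# (device_map="auto" may also need `accelerate`) and the machine has enough
# memory for a ~3B-parameter model.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="databricks/dolly-v2-3b",  # one of the "small" open models noted above
    device_map="auto",               # use a GPU if available, otherwise CPU
)

result = generator(
    "Explain in two sentences why small language models matter.",
    max_new_tokens=80,
)
print(result[0]["generated_text"])
```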
Work Shift: How AI Might Upend Pay — from bloomberg.com by Jo Constantz
Excerpt:
This all means that a time may be coming when companies need to compensate star employees for their input to AI tools rather than just their output, which may not ultimately look much different from that of their AI-assisted colleagues.
“It wouldn’t be far-fetched for them to put even more of a premium on those people because now that kind of skill gets amplified and multiplied throughout the organization,” said Erik Brynjolfsson, a Stanford professor and one of the study’s authors. “Now that top worker could change the whole organization.”
Of course, there’s a risk that companies won’t heed that advice. If AI levels performance, some executives may flatten the pay scale accordingly. Businesses would then potentially save on costs — but they would also risk losing their top performers, who wouldn’t be properly compensated for the true value of their contributions under this system.
US Supreme Court rejects computer scientist’s lawsuit over AI-generated inventions — from reuters.com by Blake Brittain
Excerpt:
WASHINGTON, April 24 – The U.S. Supreme Court on Monday declined to hear a challenge by computer scientist Stephen Thaler to the U.S. Patent and Trademark Office’s refusal to issue patents for inventions his artificial intelligence system created.
The justices turned away Thaler’s appeal of a lower court’s ruling that patents can be issued only to human inventors and that his AI system could not be considered the legal creator of two inventions that he has said it generated.
Deep learning pioneer Geoffrey Hinton has quit Google — from technologyreview.com by Will Douglas Heaven
Hinton will be speaking at EmTech Digital on Wednesday.
Excerpt:
Geoffrey Hinton, a VP and engineering fellow at Google and a pioneer of deep learning who developed some of the most important techniques at the heart of modern AI, is leaving the company after 10 years, the New York Times reported today.
According to the Times, Hinton says he has new fears about the technology he helped usher in and wants to speak openly about them, and that a part of him now regrets his life’s work.
***
In the NYT today, Cade Metz implies that I left Google so that I could criticize Google. Actually, I left so that I could talk about the dangers of AI without considering how this impacts Google. Google has acted very responsibly.
— Geoffrey Hinton (@geoffreyhinton) May 1, 2023
What Is Agent Assist? — from blogs.nvidia.com
Excerpt:
Agent assist technology uses AI and machine learning to provide facts and make real-time suggestions that help human agents across telecom, retail and other industries conduct conversations with customers.
It can integrate with contact centers’ existing applications, provide faster onboarding for agents, improve the accuracy and efficiency of their responses, and increase customer satisfaction and loyalty.
From DSC:
Is this type of thing going to provide a learning assistant/agent as well?
A chatbot that asks questions could help you spot when it makes no sense — from technologyreview.com by Melissa Heikkilä
Engaging our critical thinking is one way to stop getting fooled by lying AI.
Excerpt:
AI chatbots like ChatGPT, Bing, and Bard are excellent at crafting sentences that sound like human writing. But they often present falsehoods as facts and have inconsistent logic, and that can be hard to spot.
One way around this problem, a new study suggests, is to change the way the AI presents information. Getting users to engage more actively with the chatbot’s statements might help them think more critically about that content.
Stability AI releases DeepFloyd IF, a powerful text-to-image model that can smartly integrate text into images — from stability.ai
New AI Powered Denoise in PhotoShop — from jeadigitalmedia.org
In the most recent update, Adobe is now using AI to Denoise, Enhance and create Super Resolution or 2x the file size of the original photo. Click here to read Adobe’s post and below are photos of how I used the new AI Denoise on a photo. The big trick is that photos have to be shot in RAW.
In a talk from the cutting edge of technology, OpenAI cofounder Greg Brockman explores the underlying design principles of ChatGPT and demos some mind-blowing, unreleased plug-ins for the chatbot that sent shockwaves across the world. After the talk, head of TED Chris Anderson joins Brockman to dig into the timeline of ChatGPT’s development and get Brockman’s take on the risks, raised by many in the tech industry and beyond, of releasing such a powerful tool into the world.
Also relevant/see:
- OpenAI’s CEO Says the Age of Giant AI Models Is Already Over — from wired.com by Will Knight
Sam Altman says the research strategy that birthed ChatGPT is played out and future strides in artificial intelligence will require new ideas.
Justice Through Code — from centerforjustice.columbia.edu; via Matt Tower
Unlocking Potential for the 80+ Million Americans with a Conviction History.
Excerpt:
A world where every person, regardless of past convictions or incarceration, can access life-sustaining and meaningful careers.
We are working to make this vision a reality through our technical and professional career development accelerators.
Our Mission: We educate and nurture talent with conviction histories to create a more just and diverse workforce. We increase workplace equity through partnerships that educate and prepare teams to create supportive pathways to careers that end the cycle of poverty that contributes to incarceration and recidivism.
JTC is jointly offered by Columbia University’s Center for Justice, and the Tamer Center for Social Enterprise at the Columbia Business School.
From DSC:
After seeing this…
“Make me an app”—just talk to your @Replit app to make software pic.twitter.com/U1v5m5Un1U
— Amjad Masad (@amasad) March 24, 2023
…I wondered:
- Could GPT-4 create the “Choir Practice” app mentioned below? (Choir Practice was an idea for an app for people who want to rehearse their parts at home)
- Could GPT-4 be used to extract audio/parts from a musical score and post the parts separately for people to download/practice their individual parts?
This line of thought reminded me of this posting that I did back on 10/27/2010 entitled, “For those institutions (or individuals) who might want to make a few million.”
And I want to say that when I went back to look at this posting, I was a bit ashamed of myself. I’d like to apologize for the times when I’ve been too excited about something and exaggerated/hyped an idea up on this Learning Ecosystems blog. For example, I used the words millions of dollars in the title…and that probably wouldn’t be the case these days. (But with inflation being what it is, heh…who knows!? Maybe I shouldn’t be too hard on myself.) I just had choirs in mind when I posted the idea…and there aren’t as many choirs around these days. 🙂
a big deal: @elonmusk, Y. Bengio, S. Russell, @tegmark, V. Kraknova, P. Maes, @Grady_Booch, @AndrewYang, @tristanharris & over 1,000 others, including me, have called for a temporary pause on training systems exceeding GPT-4 https://t.co/PJ5YFu0xm9
— Gary Marcus (@GaryMarcus) March 29, 2023
The above Tweet links to:
Pause Giant AI Experiments: An Open Letter — from futureoflife.org
We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.
Elon Musk, Steve Wozniak and dozens of top scientists concerned about the technology moving too fast have signed an open letter asking companies to pull back on artificial intelligence. @trevorlault reports on the new A.I. plea. pic.twitter.com/Vu9QlKfV8C
— Good Morning America (@GMA) March 30, 2023
However, the letter has since received heavy backlash, as there appeared to be no verification of signatures. Yann LeCun from Meta denied signing the letter and said he completely disagreed with its premise. (source)
Nope.
I did not sign this letter.
I disagree with its premise. https://t.co/DoXwIZDcOx
— Yann LeCun (@ylecun) March 29, 2023
In Sudden Alarm, Tech Doyens Call for a Pause on ChatGPT — from wired.com by Will Knight (behind paywall)
Tech luminaries, renowned scientists, and Elon Musk warn of an “out-of-control race” to develop and deploy ever-more-powerful AI systems.
1/The call for a 6 month moratorium on making AI progress beyond GPT-4 is a terrible idea.
I'm seeing many new applications in education, healthcare, food, … that'll help many people. Improving GPT-4 will help. Lets balance the huge value AI is creating vs. realistic risks.
— Andrew Ng (@AndrewYNg) March 29, 2023
A quick and sobering guide to cloning yourself — from oneusefulthing.substack.com by Professor Ethan Mollick
It took me a few minutes to create a fake me giving a fake lecture.
Excerpt:
I think a lot of people do not realize how rapidly the multiple strands of generative AI (audio, text, images, and video) are advancing, and what that means for the future.
With just a photograph and 60 seconds of audio, you can now create a deepfake of yourself in just a matter of minutes by combining a few cheap AI tools. I’ve tried it myself, and the results are mind-blowing, even if they’re not completely convincing. Just a few months ago, this was impossible. Now, it’s a reality.
To start, you should probably watch the short video of Virtual Me and Real Me giving the same talk about entrepreneurship. Nothing about the Virtual Me part of the video is real, even the script was completely AI-generated.
From DSC:
Also, I wanted to post the resource below just because I think it’s an excellent question!
If ChatGPT Can Disrupt Google In 2023, What About Your Company? — from forbes.com by Glenn Gow
Excerpts:
Board members and corporate execs don’t need AI to decode the lessons to be learned from this. The lessons should be loud and clear: If even the mighty Google can be potentially overthrown by AI disruption, you should be concerned about what this may mean for your company.
…
Professions that will be disrupted by generative AI include marketing, copywriting, illustration and design, sales, customer support, software coding, video editing, film-making, 3D modeling, architecture, engineering, gaming, music production, legal contracts, and even scientific research. Software applications will soon emerge that will make it easy and intuitive for anyone to use generative AI for those fields and more.
Canary in the coal mine for coding bootcamps? — from theview.substack.com by gordonmacrae; with thanks to Mr. Ryan Craig for this resource
Excerpt:
If you run a software development bootcamp, a recent Burning Glass Institute report should keep you awake at night.
The report, titled How Skills Are Disrupting Work, looks at a decade of labor market analysis and identifies how digital skill training and credentials have responded to new jobs.
Three trends stuck out to me:
- The most future-proof skills aren’t technical
- Demand for software development is in decline
- One in eight postings feature just four skill sets
These three trends should sound a warning for software development bootcamps, in particular. Let’s see why, and how you can prepare to face the coming challenges.
Also relevant/see:
Issue #14: Trends in Bootcamps — from theview.substack.com by gordonmacrae
Excerpt:
Further consolidation of smaller providers seems likely to continue in 2023. A number of VC-backed providers will run out of money.
A lot of bootcamps will be available cheaply for any larger providers, or management companies. Growth will continue to be an option in the Middle East, as funding doesn’t look like it will dry up any time soon. And look for the larger bootcamps to expand into hire-train-deploy, apprenticeships or licensing.
As Alberto pointed out this week, it’s hard for bootcamps to sustain the growth trajectory VCs expect. But there are other options available.
Top List: The Best Mobile Learning Content Development Companies (2023) — from elearningindustry.com by Christopher Pappas
Summary:
Working remotely has brought a major shift to corporate training, making mobile learning more important than ever. Your top assets, and frankly your whole staff, will need to adjust to this new reality. To help you out, we decided to gather the best content providers for mobile learning in one place. Explore our top list and find the right partner to start your mobile learning project or even develop your own mobile app. Are you ready?
9 ways ChatGPT saves me hours of work every day, and why you’ll never outcompete those who use AI effectively. — from linkedin.com by Santiago Valdarrama
Excerpts:
A list for those who write code:
- Explaining code…
- Improving existing code…
- Rewriting code using the correct style…
- Rewriting code using idiomatic constructs…
- Simplifying code…
- Writing test cases…
- Exploring alternatives…
- Writing documentation…
- Tracking down bugs…
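As a concrete (and hedged) illustration of the first item on that list, here is a minimal sketch that asks a model to explain a code snippet via the openai Python package’s pre-1.0 chat interface; the API key handling, model name, example function, and prompt are placeholders, not a prescription.

```python
# Hedged sketch using the pre-1.0 `openai` package; key, model name, and
# snippet are placeholders for illustration only.
import openai

openai.api_key = "YOUR_API_KEY"  # assumption: you have an OpenAI API key

snippet = '''
def dedupe(items):
    seen = set()
    return [x for x in items if not (x in seen or seen.add(x))]
'''

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a senior engineer who explains code clearly."},
        {"role": "user", "content": f"Explain what this function does and any pitfalls:\n{snippet}"},
    ],
)

print(response["choices"][0]["message"]["content"])
```

The same pattern covers most of the other items on the list: swap the user message for “write test cases for this function” or “rewrite this in idiomatic style.”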
Also relevant/see:
A Chat With Dead Legends & 3 Forecasts: The Return of Socratic Method, Assertive Falsehoods, & What’s Investable? — from implications.com by Scott Belsky
A rare “Cambrian Moment” is upon us, and the implications are both mind-blowing and equally concerning; let’s explore the impact of a few forecasts in particular.
Excerpts:
Three forecasts are becoming clear…
- Education will be reimagined by AI tools.
- AI-powered results will be both highly confident and often wrong; this dangerous combo of inconsistent accuracy with high authority and assertiveness will be the long final mile to overcome.
- The defensibility of these AI capabilities as stand-alone companies will rely on data moats, privacy preferences for consumers and enterprises, developer ecosystems, and GTM advantages. (still brewing, but let’s discuss)
…
As I suggested in Edition 1, ChatGPT has done to writing what the calculator did to arithmetic. But what other implications can we expect here?
- The return of the Socratic method, at scale and on-demand…
- The art and science of prompt engineering…
- The bar for teaching will rise, as traditional research for paper-writing and memorization become antiquated ways of building knowledge.
From DSC:
Check this confluence of emerging technologies out!
Natural language interfaces have truly arrived. Here’s ChatARKit: an open source demo using #chatgpt to create experiences in #arkit. How does it work? Read on. (1/) pic.twitter.com/R2pYKS5RBq
— Bart Trzynadlowski (@BartronPolygon) December 21, 2022
Also see:
The Future of Education Using AR
via @gigadgets_ #AR #AugmentedReality #MR #mixedreality #ai #technology #vr #virtualreality #innovation #edtech #tech #future #medtech #healthtech #education #iot #teacher #classroom #mi #futurism #digitalasset #edutech pic.twitter.com/ZOP0l2kkoR
— Fred Steube (@steube) December 23, 2022
How to spot AI-generated text — from technologyreview.com by Melissa Heikkilä
The internet is increasingly awash with text written by AI software. We need new tools to detect it.
Excerpt:
This sentence was written by an AI—or was it? OpenAI’s new chatbot, ChatGPT, presents us with a problem: How will we know whether what we read online is written by a human or a machine?
…
“If you have enough text, a really easy cue is the word ‘the’ occurs too many times,” says Daphne Ippolito, a senior research scientist at Google Brain, the company’s research unit for deep learning.
…
“A typo in the text is actually a really good indicator that it was human-written,” she adds.
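Ippolito’s “the” cue can be illustrated with a few lines of Python; this toy counter is only a sketch of the idea, not a real detector, and the sample sentence is made up.

```python
# Toy illustration of the "too many 'the's" cue; not a real detector.
import re

def the_share(text: str) -> float:
    """Return the fraction of word tokens that are exactly 'the'."""
    words = re.findall(r"[a-z']+", text.lower())
    return words.count("the") / len(words) if words else 0.0

sample = "The model drafted the report, the summary, and the agenda for the meeting."
print(f"Share of tokens that are 'the': {the_share(sample):.1%}")
```

In ordinary English prose roughly one word in twenty is “the,” so a markedly higher share is the kind of statistical oddity she is pointing at, while a stray typo, as she notes, points the other way.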
7 Best Tech Developments of 2022 — from thetechranch.com
Excerpt:
As we near the end of 2022, it’s a great time to look back at some of the top technologies that have emerged this year. From AI and virtual reality to renewable energy and biotechnology, there have been a number of exciting developments that have the potential to shape the future in a big way. Here are some of the top technologies that have emerged in 2022: