Also relevant/see:

We have moved from Human Teachers and Human Learners as a dyad to a tetrad that also includes AI Teachers and AI Learners.


 

Nvidia will bring AI to every industry, says CEO Jensen Huang in GTC keynote: ‘We are at the iPhone moment of AI’ — from venturebeat.com by Sharon Goldman

Excerpt:

As Nvidia’s annual GTC conference gets underway, founder and CEO Jensen Huang, in his characteristic leather jacket and standing in front of a vertical green wall at Nvidia headquarters in Santa Clara, California, delivered a highly-anticipated keynote that focused almost entirely on AI. His presentation announced partnerships with Google, Microsoft and Oracle, among others, to bring new AI, simulation and collaboration capabilities to “every industry.”

Introducing Mozilla.ai: Investing in trustworthy AI — from blog.mozilla.org by Mark Surman
We’re committing $30M to build Mozilla.ai: A startup — and a community — building a trustworthy, independent, and open-source AI ecosystem.

Excerpt (emphasis DSC):

We’re only three months into 2023, and it’s already clear what one of the biggest stories of the year is: AI. AI has seized the public’s attention like Netscape did in 1994, and the iPhone did in 2007.

New tools like Stable Diffusion and the just-released GPT-4 are reshaping not just how we think about the internet, but also communication and creativity and society at large. Meanwhile, relatively older AI tools like the recommendation engines that power YouTube, TikTok and other social apps are growing even more powerful — and continuing to influence billions of lives.

This new wave of AI has generated excitement, but also significant apprehension. We aren’t just wondering What’s possible? and How can people benefit? We’re also wondering What could go wrong? and How can we address it? Two decades of social media, smartphones and their consequences have made us leery.    

ChatGPT plugins — from openai.com

Excerpt:

Users have been asking for plugins since we launched ChatGPT (and many developers are experimenting with similar ideas) because they unlock a vast range of possible use cases. We’re starting with a small set of users and are planning to gradually roll out larger-scale access as we learn more (for plugin developers, ChatGPT users, and after an alpha period, API users who would like to integrate plugins into their products). We’re excited to build a community shaping the future of the human–AI interaction paradigm.



Bots like ChatGPT aren’t sentient. Why do we insist on making them seem like they are? — from cbc.ca by Matt Meuse
‘There’s no secret homunculus inside the system that’s understanding what you’re talking about’

Excerpt:

LLMs like ChatGPT are trained on massive troves of text, which they use to assemble responses to questions by analyzing and predicting what words could most plausibly come next based on the context of other words. One way to think of it, as Marcus has memorably described it, is “auto-complete on steroids.”

Marcus says it’s important to understand that even though the results sound human, these systems don’t “understand” the words or the concepts behind them in any meaningful way. But because the results are so convincing, that can be easy to forget.

“We’re doing a kind of anthropomorphization … where we’re attributing some kind of animacy and life and intelligence there that isn’t really,” he said.
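To make Marcus's "auto-complete on steroids" description a bit more concrete, here is a toy, purely illustrative sketch of next-word prediction in Python. Real systems like ChatGPT use large neural networks trained on enormous corpora rather than a bigram table, but the underlying task (predict a plausible next word from the context so far) is the same.

```python
# Toy next-word predictor built from a tiny corpus (a bigram table).
# Purely illustrative -- real LLMs use neural networks over huge datasets,
# but the core job is the same: guess a plausible next token from context.
from collections import Counter, defaultdict

corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count which word tends to follow which.
next_word_counts = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    next_word_counts[current][nxt] += 1

def predict_next(word: str, k: int = 3):
    """Return the k most plausible next words and their relative frequencies."""
    counts = next_word_counts[word]
    total = sum(counts.values())
    return [(w, round(c / total, 2)) for w, c in counts.most_common(k)]

print(predict_next("the"))  # top candidates for the word that follows "the"
```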


10 gifts we unboxed at Canva Create — from canva.com
Earlier this week we dropped 10 unopened gifts onto the Canva homepage of 125 million people across the globe. Today, we unwrapped them on the stage at Canva Create.


Google Bard Plagiarized Our Article, Then Apologized When Caught — from tomshardware.com by Avram Piltch
The chatbot implied that it had conducted its own CPU tests.

 

Planning for AGI and beyond — from openai.com by Sam Altman

Excerpt:

There are several things we think are important to do now to prepare for AGI.

First, as we create successively more powerful systems, we want to deploy them and gain experience with operating them in the real world. We believe this is the best way to carefully steward AGI into existence—a gradual transition to a world with AGI is better than a sudden one. We expect powerful AI to make the rate of progress in the world much faster, and we think it’s better to adjust to this incrementally.

A gradual transition gives people, policymakers, and institutions time to understand what’s happening, personally experience the benefits and downsides of these systems, adapt our economy, and to put regulation in place. It also allows for society and AI to co-evolve, and for people collectively to figure out what they want while the stakes are relatively low.

*AGI stands for Artificial General Intelligence

 

The future of computer programming in prison — from College Inside by Open Campus national reporter Charlotte West
A biweekly newsletter about the future of postsecondary education in prisons.

Excerpt:

Participant Leonard Bishop hadn’t touched technology in the 17 years he served in the federal system prior to transferring to the D.C. Jail in 2018. When he first got a tablet, he said it took him a few days to figure out how to navigate through it, but then “I couldn’t put it down.”

Bishop said he was surprised by how easy it was to learn the skills he needed to earn the AWS certification. “It helps you transition back into society, especially for someone who has been gone so long,” he said.


Also relevant/see:

This AWS Cloud certification program opens new paths for inmates — from amazon.com; with thanks to Paul Fain for this resource
A jail-based program aims to expand career opportunities through cloud-skills training.

Excerpt:

Julian Blair knew nothing about cloud computing when he became incarcerated in a Washington, D.C. jail more than two years ago.

“I’d never done anything with a computer besides video games, typing papers in college, and downloading music on an iPad,” said Blair.

Now, after three months of work with an educational program led by APDS and Amazon Web Services (AWS) inside the jail, Blair and 10 other residents at the facility have successfully passed the AWS Certified Cloud Practitioner exam.


 

From DSC:
Below are several months’ worth of labor market updates from Handshake’s blog — with thanks to Paul Fain for this resource.

February 2023 Early Talent Labor Market Update — from joinhandshake.com/blog/
Demand for tech talent outside of coastal states

Key takeaways

  • Tech hubs no longer? Traditional tech hubs like California and New York are seeing fewer entry-level job openings for technical talent and less interest from students.
  • A rising tech diaspora: States in other parts of the country, like Iowa (+10.9%) and Maryland (+5.1%), are emerging as locations with more job postings for technical roles—at the same time, students have demonstrated interest in applying to opportunities in those states.
  • New hubs for tech talent: Students are demonstrating greater openness to a wider array of geographic locations for tech roles with large increases in applications per job in states like Oklahoma (4.8x), Arizona (2.3x), and Oregon (2.7x).

January 2023 Early Talent Labor Market Updates — from joinhandshake.com/blog/
Employers still have strong demand for entry-level technical talent.

Key takeaways
2022 was a difficult year for workers in tech, as the industry was hit hard by hiring freezes and layoffs. Despite an overall slowdown in tech sector hiring, there are several bright spots in the technical labor market that should give early talent reason for optimism.

  • Tech industry is still investing in (tech) talent
  • Software and computer tech jobs outside of tech industry
  • Tech, but not in software

December 2022 Early Talent Labor Market Updates — from joinhandshake.com/blog/
Part-time jobs falling year over year with retail leading the way

Key takeaways

  1. Demand for early talent to fill part-time roles has dropped compared to earlier this year, and part-time jobs saw the largest year-over-year decrease (-32%) in job postings on the platform, compared with full-time roles and internships.
  2. Notably, employers who are still creating new jobs have been hiring more for full-time roles and internships, with job postings per employer up 1% and 7%, respectively.
  3. The retail industry, which relies on part-time workers through the holiday season, has seen the greatest year-over-year decrease in part-time job postings listed between October and November, with a drop of 51%.
 

ChatGPT sets record for fastest-growing user base – analyst note — from reuters.com by Krystal Hu

Excerpt (emphasis DSC):

Feb 1 (Reuters) – ChatGPT, the popular chatbot from OpenAI, is estimated to have reached 100 million monthly active users in January, just two months after launch, making it the fastest-growing consumer application in history, according to a UBS study on Wednesday.

The report, citing data from analytics firm Similarweb, said an average of about 13 million unique visitors had used ChatGPT per day in January, more than double the levels of December.

“In 20 years following the internet space, we cannot recall a faster ramp in a consumer internet app,” UBS analysts wrote in the note.


From DSC:
This reminds me of the current exponential pace of change that we are experiencing…

…and how we struggle with that kind of pace.

 

Radar Trends to Watch: February 2023 — from oreilly.com by Mike Loukides
Developments in Data, Programming, Security, and More

Excerpt:

One application for ChatGPT is writing documentation for developers, and providing a conversational search engine for the documentation and code. Writing internal documentation is an often omitted part of any software project.

DoNotPay has developed an AI “lawyer” that is helping a defendant make arguments in court. The lawyer runs on a cell phone, through which it hears the proceedings. It tells the defendant what to say through Bluetooth earbuds. DoNotPay’s CEO notes that this is illegal in almost all courtrooms. (After receiving threats from bar associations, DoNotPay has abandoned this trial.)

Matter, a standard for smart home connectivity, appears to be gaining momentum. Among other things, it allows devices to interact with a common controller, rather than an app (and possibly a hub) for each device.
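Returning to the documentation use case mentioned at the top of this excerpt, here is a minimal sketch of how one might ask a chat model to draft a docstring for a snippet of code. It assumes the openai Python package (the pre-1.0 ChatCompletion interface), an OPENAI_API_KEY environment variable, and an example model name; the prompt, model, and sample function are all placeholders to adjust for your own setup.

```python
# Minimal sketch: ask a chat model to draft documentation for a code snippet.
# Assumes `pip install openai` (pre-1.0 interface) and OPENAI_API_KEY set.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

SOURCE = '''
def moving_average(values, window):
    return [sum(values[i:i + window]) / window
            for i in range(len(values) - window + 1)]
'''

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # example model name; use whatever is available to you
    messages=[
        {"role": "system", "content": "You write concise developer documentation."},
        {"role": "user", "content": f"Write a docstring and a short usage note for:\n{SOURCE}"},
    ],
)

print(response["choices"][0]["message"]["content"])
```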

 

Canary in the coal mine for coding bootcamps? — from theview.substack.com by gordonmacrae; with thanks to Mr. Ryan Craig for this resource

Excerpt:

If you run a software development bootcamp, a recent Burning Glass Institute report should keep you awake at night.

The report, titled How Skills Are Disrupting Work, looks at a decade of labor market analysis and identifies how digital skill training and credentials have responded to new jobs.

Three trends stuck out to me:

  • The most future-proof skills aren’t technical
  • Demand for software development is in decline
  • One in eight postings feature just four skill sets

These three trends should sound a warning for software development bootcamps, in particular. Let’s see why, and how you can prepare to face the coming challenges.


Also relevant/see:

Issue #14: Trends in Bootcamps — from theview.substack.com by gordonmacrae

Excerpt:

Further consolidation of smaller providers seems likely to continue in 2023. A number of VC-backed providers will run out of money.

A lot of bootcamps will be available cheaply to larger providers or management companies. Growth will continue to be an option in the Middle East, as funding doesn’t look likely to dry up any time soon. And look for the larger bootcamps to expand into hire-train-deploy, apprenticeships or licensing.

As Alberto pointed out this week, it’s hard for bootcamps to sustain the growth trajectory VCs expect. But there are other options available.


 

9 ways ChatGPT saves me hours of work every day, and why you’ll never outcompete those who use AI effectively. — from linkedin.com by Santiago Valdarrama

Excerpts:

A list for those who write code:

  1. Explaining code…
  2. Improving existing code…
  3. Rewriting code using the correct style…
  4. Rewriting code using idiomatic constructs…
  5. Simplifying code…
  6. Writing test cases…
  7. Exploring alternatives…
  8. Writing documentation…
  9. Tracking down bugs…
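Most of these nine items come down to how the request is framed rather than any special tooling. As a small, purely illustrative sketch, here are prompt templates for two of the items above (explaining code and writing test cases); the wording is an assumption, and the resulting prompt can be pasted into ChatGPT or sent through the API.

```python
# Minimal sketch: prompt templates for two of the tasks listed above.
# The wording is illustrative; paste the result into ChatGPT or send it via the API.

EXPLAIN_TEMPLATE = (
    "Explain what the following code does, step by step, "
    "and point out any likely bugs or edge cases:\n\n{code}"
)

TESTS_TEMPLATE = (
    "Write pytest test cases for the following function, "
    "covering normal inputs, edge cases, and invalid inputs:\n\n{code}"
)

def build_prompt(template: str, code: str) -> str:
    """Fill a template with the code to be explained or tested."""
    return template.format(code=code)

snippet = "def clamp(x, lo, hi):\n    return max(lo, min(x, hi))"
print(build_prompt(TESTS_TEMPLATE, snippet))
```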

Also relevant/see:

A Chat With Dead Legends & 3 Forecasts: The Return of Socratic Method, Assertive Falsehoods, & What’s Investable? — from implications.com by Scott Belsky
A rare “Cambrian Moment” is upon us, and the implications are both mind-blowing and equally concerning; let’s explore the impact of a few forecasts in particular.

Excerpts:

Three forecasts are becoming clear…

  • Education will be reimagined by AI tools.
  • AI-powered results will be both highly confident and often wrong; this dangerous combo of inconsistent accuracy with high authority and assertiveness will be the long final mile to overcome.
  • The defensibility of these AI capabilities as stand-alone companies will rely on data moats, privacy preferences for consumers and enterprises, developer ecosystems, and GTM advantages. (still brewing, but let’s discuss)

As I suggested in Edition 1, ChatGPT has done to writing what the calculator did to arithmetic. But what other implications can we expect here?

  • The return of the Socratic method, at scale and on-demand…
  • The art and science of prompt engineering…
  • The bar for teaching will rise, as traditional research for paper-writing and memorization become antiquated ways of building knowledge.

 

From DSC:
Check out the items below. As with most technologies, there are likely going to be plusses & minuses regarding the use of AI in digital video, communications, arts, and music.






 



 

From DSC:
Check this confluence of emerging technologies out!

Also see:

How to spot AI-generated text — from technologyreview.com by Melissa Heikkilä
The internet is increasingly awash with text written by AI software. We need new tools to detect it.

Excerpt:

This sentence was written by an AI—or was it? OpenAI’s new chatbot, ChatGPT, presents us with a problem: How will we know whether what we read online is written by a human or a machine?

“If you have enough text, a really easy cue is the word ‘the’ occurs too many times,” says Daphne Ippolito, a senior research scientist at Google Brain, the company’s research unit for deep learning.

“A typo in the text is actually a really good indicator that it was human-written,” she adds.
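Ippolito's "the" cue is easy to experiment with. Below is a minimal sketch of that single, crude heuristic in Python; it is nowhere near a reliable detector, just an illustration of the kind of word-frequency signal a statistical detector might examine.

```python
# Crude illustration of one cue mentioned above: how often "the" appears.
# This is NOT a reliable AI-text detector; it only shows the kind of
# word-frequency signal a statistical detector might look at.
import re

def the_frequency(text: str) -> float:
    """Fraction of the words in `text` that are the word 'the'."""
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0
    return words.count("the") / len(words)

sample = "The cat sat on the mat while the dog watched the door."
print(f"'the' makes up {the_frequency(sample):.1%} of the words")
```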

7 Best Tech Developments of 2022 — from thetechranch.com

Excerpt:

As we near the end of 2022, it’s a great time to look back at some of the top technologies that have emerged this year. From AI and virtual reality to renewable energy and biotechnology, there have been a number of exciting developments that have the potential to shape the future in a big way. Here are some of the top technologies that have emerged in 2022:

 

AI bot ChatGPT stuns academics with essay-writing skills and usability — from theguardian.com by Alex Hern
Latest chatbot from Elon Musk-founded OpenAI can identify incorrect premises and refuse to answer inappropriate requests

Excerpt:

Professors, programmers and journalists could all be out of a job in just a few years, after the latest chatbot from the Elon Musk-founded OpenAI foundation stunned onlookers with its writing ability, proficiency at complex tasks, and ease of use.

The system, called ChatGPT, is the latest evolution of the GPT family of text-generating AIs. Two years ago, the team’s previous AI, GPT-3, was able to generate an opinion piece for the Guardian, and ChatGPT has significant further capabilities.

In the days since it was released, academics have generated responses to exam queries that they say would result in full marks if submitted by an undergraduate, and programmers have used the tool to solve coding challenges in obscure programming languages in a matter of seconds – before writing limericks explaining the functionality.

 


Also related/see:


AI and the future of undergraduate writing — from chronicle.com by Beth McMurtrie

Excerpts:

Is the college essay dead? Are hordes of students going to use artificial intelligence to cheat on their writing assignments? Has machine learning reached the point where auto-generated text looks like what a typical first-year student might produce?

And what does it mean for professors if the answer to those questions is “yes”?

Scholars of teaching, writing, and digital literacy say there’s no doubt that tools like ChatGPT will, in some shape or form, become part of everyday writing, the way calculators and computers have become integral to math and science. It is critical, they say, to begin conversations with students and colleagues about how to shape and harness these AI tools as an aide, rather than a substitute, for learning.

“Academia really has to look at itself in the mirror and decide what it’s going to be,” said Josh Eyler, director of the Center for Excellence in Teaching and Learning at the University of Mississippi, who has criticized the “moral panic” he has seen in response to ChatGPT. “Is it going to be more concerned with compliance and policing behaviors and trying to get out in front of cheating, without any evidence to support whether or not that’s actually going to happen? Or does it want to think about trust in students as its first reaction and building that trust into its response and its pedagogy?”

 

 

 

ChatGPT Could Be AI’s iPhone Moment — from bloomberg.com by Vlad Savov; with thanks to Dany DeGrave for his Tweet on this

Excerpt:

The thing is, a good toy has a huge advantage: People love to play with it, and the more they do, the quicker its designers can make it into something more. People are documenting their experiences with ChatGPT on Twitter, looking like giddy kids experimenting with something they’re not even sure they should be allowed to have. There’s humor, discovery and a game of figuring out the limitations of the system.

 




 

Radar Trends to Watch: December 2022 — from oreilly.com by Mike Loukides
Developments in Security, Cryptocurrency, Web, and More

Excerpt:

This month’s news has been overshadowed by the implosion of SBF’s FTX and the possible implosion of Elon Musk’s Twitter. All the noise doesn’t mean that important things aren’t happening. Many companies, organizations, and individuals are wrestling with the copyright implications of generative AI. Google is playing a long game: they believe that the goal isn’t to imitate artworks, but to build better user interfaces for humans to collaborate with AI so they can create something new. Facebook’s AI for playing Diplomacy is an exciting new development. Diplomacy requires players to negotiate with other players, assess their mental state, and decide whether or not to honor their commitments. None of these are easy tasks for an AI. And IBM now has a 433-qubit quantum chip, an important step towards making a useful quantum processor.

 

Resources for Computer Science Education Week (December 5-11, 2022) — with thanks to Mark Adams for these resources

Per Mark, here are a few resources intended to show students how computers can become part of their outside interests as well as their future careers.

Educating Engineers

Maryville University

Fullstack Academy

Also see:

Computer Science Education Week is December 5-11, 2022

 
© 2022 | Daniel Christian