ChatGPT Creator Is Talking to Investors About Selling Shares at $29 Billion Valuation — from wsj.com by Berber Jin and Miles Kruppa
Tender offer at that valuation would make OpenAI one of the most valuable U.S. startups

Here’s how Microsoft could use ChatGPT — from The Algorithm by Melissa Heikkilä

Excerpt (emphasis DSC):

Microsoft is reportedly eyeing a $10 billion investment in OpenAI, the startup that created the viral chatbot ChatGPT, and is planning to integrate it into Office products and Bing search. The tech giant has already invested at least $1 billion into OpenAI. Some of these features might be rolling out as early as March, according to The Information.

This is a big deal. If successful, it will bring powerful AI tools to the masses. So what would ChatGPT-powered Microsoft products look like? We asked Microsoft and OpenAI. Neither was willing to answer our questions on how they plan to integrate AI-powered products into Microsoft’s tools, even though work must be well underway to do so. However, we do know enough to make some informed, intelligent guesses. Hint: it’s probably good news if, like me, you find creating PowerPoint presentations and answering emails boring.

And speaking of Microsoft and AI, also see:

I have maintained for several years, including in my book ‘AI for Learning’, that AI is the technology of the age and will change everything. This is unfolding as we speak, but it is interesting to ask who the winners are likely to be.

Donald Clark

The Expanding Dark Forest and Generative AI — from maggieappleton.com by Maggie Appleton
Proving you’re a human on a web flooded with generative AI content

Assumed audience:

People who have heard of GPT-3 / ChatGPT, and are vaguely following the advances in machine learning, large language models, and image generators. Also people who care about making the web a flourishing social and intellectual space.

That dark forest is about to expand. Large Language Models (LLMs) that can instantly generate coherent swaths of human-like text have just joined the party.

 

DeepMind CEO Demis Hassabis Urges Caution on AI — from time.com by Billy Perrigo

It is in this uncertain climate that Hassabis agrees to a rare interview, to issue a stark warning about his growing concerns. “I would advocate not moving fast and breaking things.”

“When it comes to very powerful technologies—and obviously AI is going to be one of the most powerful ever—we need to be careful,” he says. “Not everybody is thinking about those things. It’s like experimentalists, many of whom don’t realize they’re holding dangerous material.” Worse still, Hassabis points out, we are the guinea pigs.

Demis Hassabis 

Excerpt (emphasis DSC):

Hassabis says these efforts are just the beginning. He and his colleagues have been working toward a much grander ambition: creating artificial general intelligence, or AGI, by building machines that can think, learn, and be set to solve humanity’s toughest problems. Today’s AI is narrow, brittle, and often not very intelligent at all. But AGI, Hassabis believes, will be an “epoch-defining” technology—like the harnessing of electricity—that will change the very fabric of human life. If he’s right, it could earn him a place in history that would relegate the namesakes of his meeting rooms to mere footnotes.

But with AI’s promise also comes peril. In recent months, researchers building an AI system to design new drugs revealed that their tool could be easily repurposed to make deadly new chemicals. A separate AI model trained to spew out toxic hate speech went viral, exemplifying the risk to vulnerable communities online. And inside AI labs around the world, policy experts were grappling with near-term questions like what to do when an AI has the potential to be commandeered by rogue states to mount widespread hacking campaigns or infer state-level nuclear secrets.

AI-assisted plagiarism? ChatGPT bot says it has an answer for that — from theguardian.com by Alex Hern
Silicon Valley firm insists its new text generator, which writes human-sounding essays, can overcome fears over cheating

Excerpt:

Headteachers and university lecturers have expressed concerns that ChatGPT, which can provide convincing human-sounding answers to exam questions, could spark a wave of cheating in homework and exam coursework.

Now, the bot’s makers, San Francisco-based OpenAI, are trying to counter the risk by “watermarking” the bot’s output and making plagiarism easier to spot.
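OpenAI had not published the details of its watermarking plan at the time, but one widely discussed approach in LLM watermarking research works roughly like this: at each generation step, a pseudorandom function of the previous token partitions the vocabulary into a favored "green" list and a disfavored "red" list, and the generator is biased toward green tokens; a detector who knows the seeding scheme can then count how often a text lands on green tokens. The sketch below is a toy illustration of that idea only — the vocabulary, function names, and hard green-only sampling are all illustrative assumptions, not OpenAI's actual scheme (real proposals bias logits softly and use a statistical test on the green-token count).

```python
import hashlib
import random

VOCAB = [f"tok{i}" for i in range(1000)]  # toy vocabulary for illustration

def green_list(prev_token: str, fraction: float = 0.5) -> set:
    """Seed a PRNG with the previous token and mark a fixed
    fraction of the vocabulary as 'green' (favored) tokens."""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    return set(rng.sample(VOCAB, int(len(VOCAB) * fraction)))

def generate_watermarked(length: int, seed_token: str = "tok0") -> list:
    """Toy 'model': at every step, pick some token from the green
    list derived from the previous token."""
    rng = random.Random(42)
    out, prev = [], seed_token
    for _ in range(length):
        choice = rng.choice(sorted(green_list(prev)))
        out.append(choice)
        prev = choice
    return out

def green_fraction(tokens: list, seed_token: str = "tok0") -> float:
    """Detector: recompute each green list and count how often the
    text landed on a green token. Watermarked text scores near 1.0;
    unrelated text scores near the green fraction (here 0.5)."""
    prev, hits = seed_token, 0
    for tok in tokens:
        if tok in green_list(prev):
            hits += 1
        prev = tok
    return hits / len(tokens)

watermarked = generate_watermarked(200)
rng7 = random.Random(7)
unmarked = [rng7.choice(VOCAB) for _ in range(200)]  # no watermark
print(green_fraction(watermarked))  # 1.0 (every token was green)
print(green_fraction(unmarked))     # ≈ 0.5
```

The key property is that detection needs no access to the model, only to the seeding scheme — which is also why critics note that paraphrasing or light editing can erode the green-token signal.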

Schools Shouldn’t Ban Access to ChatGPT — from time.com by Joanne Lipman and Rebecca Distler

Excerpt (emphasis DSC):

Students need now, more than ever, to understand how to navigate a world in which artificial intelligence is increasingly woven into everyday life. It’s a world that they, ultimately, will shape.

We hail from two professional fields that have an outsize interest in this debate. Joanne is a veteran journalist and editor deeply concerned about the potential for plagiarism and misinformation. Rebecca is a public health expert focused on artificial intelligence, who champions equitable adoption of new technologies.

We are also mother and daughter. Our dinner-table conversations have become a microcosm of the argument around ChatGPT, weighing its very real dangers against its equally real promise. Yet we both firmly believe that a blanket ban is a missed opportunity.

ChatGPT: Threat or Menace? — from insidehighered.com by Steven Mintz
Are fears about generative AI warranted?

And see Joshua Kim’s A Friendly Attempt to Balance Steve Mintz’s Piece on Higher Ed Hard Truths out at insidehighered.com | Comparing the health care and higher ed systems.

 



What Leaders Should Know About Emerging Technologies — from forbes.com by Benjamin Laker

Excerpt (emphasis DSC):

The rapid pace of change is driven by a “perfect storm” of factors, including the falling cost of computing power, the rise of data-driven decision-making, and the increasing availability of new technologies. “The speed of current breakthroughs has no historical precedent,” concluded Andrew Doxsey, co-founder of Libra Incentix, in an interview. “Unlike previous technological revolutions, the Fourth Industrial Revolution is evolving exponentially rather than linearly. Furthermore, it disrupts almost every industry worldwide.”

I asked ChatGPT to write my cover letters. 2 hiring managers said they would have given me an interview but the letters lacked personality. — from businessinsider.com by Beatrice Nolan

Key points:

  • An updated version of the AI chatbot ChatGPT was recently released to the public.
  • I got the chatbot to write cover letters for real jobs and asked hiring managers what they thought.
  • The managers said they would’ve given me a call but that the letters lacked personality.




 

From DSC:
Check out the items below. As with most technologies, there are likely going to be plusses & minuses regarding the use of AI in digital video, communications, arts, and music.





Also somewhat relevant, see:

 

AI bot ChatGPT stuns academics with essay-writing skills and usability — from theguardian.com by Alex Hern
Latest chatbot from Elon Musk-founded OpenAI can identify incorrect premises and refuse to answer inappropriate requests

Excerpt:

Professors, programmers and journalists could all be out of a job in just a few years, after the latest chatbot from the Elon Musk-founded OpenAI foundation stunned onlookers with its writing ability, proficiency at complex tasks, and ease of use.

The system, called ChatGPT, is the latest evolution of the GPT family of text-generating AIs. Two years ago, the team’s previous AI, GPT-3, was able to generate an opinion piece for the Guardian, and ChatGPT has significant further capabilities.

In the days since it was released, academics have generated responses to exam queries that they say would result in full marks if submitted by an undergraduate, and programmers have used the tool to solve coding challenges in obscure programming languages in a matter of seconds – before writing limericks explaining the functionality.

 


Also related/see:


AI and the future of undergraduate writing — from chronicle.com by Beth McMurtrie

Excerpts:

Is the college essay dead? Are hordes of students going to use artificial intelligence to cheat on their writing assignments? Has machine learning reached the point where auto-generated text looks like what a typical first-year student might produce?

And what does it mean for professors if the answer to those questions is “yes”?

Scholars of teaching, writing, and digital literacy say there’s no doubt that tools like ChatGPT will, in some shape or form, become part of everyday writing, the way calculators and computers have become integral to math and science. It is critical, they say, to begin conversations with students and colleagues about how to shape and harness these AI tools as an aide, rather than a substitute, for learning.

“Academia really has to look at itself in the mirror and decide what it’s going to be,” said Josh Eyler, director of the Center for Excellence in Teaching and Learning at the University of Mississippi, who has criticized the “moral panic” he has seen in response to ChatGPT. “Is it going to be more concerned with compliance and policing behaviors and trying to get out in front of cheating, without any evidence to support whether or not that’s actually going to happen? Or does it want to think about trust in students as its first reaction and building that trust into its response and its pedagogy?”

 

 

 

ChatGPT Could Be AI’s iPhone Moment — from bloomberg.com by Vlad Savov; with thanks to Dany DeGrave for his Tweet on this

Excerpt:

The thing is, a good toy has a huge advantage: People love to play with it, and the more they do, the quicker its designers can make it into something more. People are documenting their experiences with ChatGPT on Twitter, looking like giddy kids experimenting with something they’re not even sure they should be allowed to have. There’s humor, discovery and a game of figuring out the limitations of the system.

 


And on the legal side of things:


 

Police are rolling out new tech without knowing their effects on people — from The Algorithm by Melissa Heikkilä

Excerpt:

I got lucky—my encounter was with a drone in virtual reality as part of an experiment by a team from University College London and the London School of Economics. They’re studying how people react when meeting police drones, and whether they come away feeling more or less trusting of the police.

It seems obvious that encounters with police drones might not be pleasant. But police departments are adopting these sorts of technologies without even trying to find out.

“Nobody is even asking the question: Is this technology going to do more harm than good?” says Aziz Huq, a law professor at the University of Chicago, who is not involved in the research.

 

How AI will change Education: Part I | Transcend Newsletter #59 — from transcend.substack.com by Alberto Arenaza; with thanks to GSV’s Big 10 for this resource

Excerpt:

You’ve likely been reading for the last few minutes my arguments for why AI is going to change education. You may agree with some points, disagree with others…

Only, those were not my words.

An AI has written every single word in this essay up until here.

The only thing I wrote myself was the first sentence: Artificial Intelligence is going to revolutionize education. The images too, everything was generated by AI.

 

7 Technologies that are Changing Healthcare — from digitalsalutem.com by João Bocas

In this article we are going to talk about the seven technologies that are changing healthcare:

  1. Artificial Intelligence
  2. Blockchain
  3. Virtual Reality
  4. Robots
  5. Mapping technologies
  6. Big Data
  7. Neurotechnology

This startup 3D prints tiny homes from recyclable plastics — from interestingengineering.com by Nergis Firtina; with thanks to Laura Goodrich for this resource

A 3D printed house by Azure

Satellite Billboards Are a Dystopian Future We Don’t Need — from gizmodo.com by George Dvorsky; with thanks to Laura Goodrich for this resource
Brightly lit ads in orbit are technologically and economically viable, say Russian scientists. But can we not?

Artist’s conception of a cubesat ad showing the Olympic rings. Image: Shamil Biktimirov/Skoltech

South Korea to Provide Blockchain-based Digital Identities to Citizens by 2024 — from blockchain.news by Annie Li; with thanks to Laura Goodrich for this resource

Excerpt:

South Korea plans to provide citizens with blockchain-encrypted digital identities on their smartphones by 2024 to facilitate its economic development, Bloomberg reported Monday.

The South Korean government stated that as the digital economy expands, an ID embedded in the smartphone is an indispensable emerging technology for supporting data-driven development.

From DSC:
Interesting to see blockchain show up in the first item above on healthcare and also on this item coming out of South Korea for digital identities.

The Bruce Willis Deepfake Is Everyone’s Problem — from wired.com by Will Bedingfield; with thanks to Stephen Downes for this resource
There’s a fight brewing over how Hollywood stars can protect their identities. But it’s not just actors who should be paying attention.

Excerpts:

Yet the question of “who owns Bruce Willis,” as Levy put it, isn’t only a concern for the Hollywood star and his representatives. It concerns actors unions across the world, fighting against contracts that exploit their members’ naivety about AI. And, for some experts, it’s a question that implicates everyone, portending a wilder, dystopian future—one in which identities are bought, sold, and seized.

“This is relevant not just to AI contracts [for synthetic performances], but any contract involving rights to one’s likeness and voice,” says Danielle S. Van Lier, assistant general counsel, intellectual property and contracts at SAG-AFTRA. “We have been seeing contracts that now include ‘simulation rights’ to performers’ images, voices, and performances. These contract terms are buried deep in the boilerplate of performance agreements in traditional media.”


Addendum on 10/26/22:


 

Threats uncovered: QR code exploits offer personal and business risks — from technative.io by Len Noe

Excerpts:

Cyber attackers have quickly caught onto QR codes as a social vulnerability and attacks using them as the vector are on the rise.

It’s clear we intuitively trust QR codes, even though this trust is poorly founded. To get a clearer picture of exactly how QR codes could present a threat, I did some digging. Through research, I discovered a variety of ways QR codes can be used maliciously, to steal not only personal information but provide a solid base of information from which to attack an organisation.
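The practical takeaway from that research is to treat a QR payload like any other untrusted input and inspect the decoded URL before opening it. As a minimal sketch of that habit, the check below (using only Python's standard library) requires HTTPS, an allowlisted hostname, and no embedded credentials; the `TRUSTED_HOSTS` set and function name are hypothetical examples, and a real deployment would also screen for lookalike domains and shortened URLs.

```python
from urllib.parse import urlparse

# Hypothetical allowlist for illustration only.
TRUSTED_HOSTS = {"example.com", "pay.example.com"}

def is_safe_qr_url(payload: str) -> bool:
    """Vet a URL decoded from a QR code before opening it."""
    try:
        url = urlparse(payload)
    except ValueError:
        return False
    if url.scheme != "https":                      # reject plain HTTP and custom schemes
        return False
    if not url.hostname or url.hostname.lower() not in TRUSTED_HOSTS:
        return False
    if url.username or url.password:               # e.g. https://example.com@evil.io/
        return False
    return True

print(is_safe_qr_url("https://pay.example.com/invoice/123"))  # True
print(is_safe_qr_url("http://pay.example.com/invoice/123"))   # False: not HTTPS
print(is_safe_qr_url("https://example.com@evil.io/login"))    # False: userinfo trick
```

The `user@host` case is worth calling out: a payload like `https://example.com@evil.io/` displays a trusted-looking name but actually navigates to the host after the `@`, which is exactly the kind of detail an intuitive glance at a QR code never catches.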

 

This Uncensored AI Art Tool Can Generate Fantasies—and Nightmares — from wired.com by Will Knight
Open source project Stable Diffusion allows anyone to conjure images with algorithms, but some fear it will be used to create unethical horrors.

Excerpt:

Image generators like Stable Diffusion can create what look like real photographs or hand-crafted illustrations depicting just about anything a person can imagine. This is possible thanks to algorithms that learn to associate the properties of a vast collection of images taken from the web and image databases with their associated text labels. Algorithms learn to render new images to match a text prompt in a process that involves adding and removing random noise to an image.
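The "adding noise" half of that process can be sketched concretely. Under standard DDPM-style assumptions, the forward process blends the original image with Gaussian noise according to a schedule, so that after enough steps almost no signal remains; generation then runs this in reverse, with a trained neural network predicting the noise to subtract at each step (that network is the part a toy snippet cannot show). The 1-D "image" and fixed schedule below are illustrative assumptions, not Stable Diffusion's actual configuration.

```python
import math
import random

rng = random.Random(0)

def forward_diffuse(x0, t, betas):
    """DDPM-style forward process:
    x_t = sqrt(a_bar) * x0 + sqrt(1 - a_bar) * noise,
    where a_bar is the cumulative product of (1 - beta) up to step t.
    As t grows, a_bar shrinks and x_t approaches pure Gaussian noise."""
    a_bar = 1.0
    for beta in betas[:t]:
        a_bar *= (1.0 - beta)
    noisy = [math.sqrt(a_bar) * v + math.sqrt(1.0 - a_bar) * rng.gauss(0, 1)
             for v in x0]
    return noisy, a_bar

image = [1.0] * 64            # toy 1-D "image"
betas = [0.02] * 200          # fixed noise schedule (an assumption)
_, a_bar_early = forward_diffuse(image, 10, betas)
_, a_bar_late = forward_diffuse(image, 200, betas)
print(a_bar_early)  # well above 0.5: signal mostly intact
print(a_bar_late)   # near zero: almost pure noise
```

Training amounts to showing the network many `(noisy, noise)` pairs from this forward process so it learns to estimate the noise; sampling a new image starts from pure noise and applies that estimate step by step in reverse.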

Also relevant/see:

There’s a text-to-image AI art app for Mac now—and it will change everything — from fastcompany.com by Jesus Diaz
Diffusion Bee harnesses the power of the open source text-to-image AI Stable Diffusion, turning it into a one-click Mac App. Brace yourself for a new creativity Big Bang.


Speaking of AI, also see:

 

Radar Trends to Watch: September 2022 Developments in AI, Privacy, Biology, and More — from oreilly.com by Mike Loukides

Excerpt:

It’s hardly news to talk about the AI developments of the last month. DALL-E is increasingly popular, and being used in production. Google has built a robot that incorporates a large language model so that it can respond to verbal requests. And we’ve seen a plausible argument that natural language models can be made to reflect human values, without raising the question of consciousness or sentience.

For the first time in a long time we’re talking about the Internet of Things. We’ve got a lot of robots, and Chicago is attempting to make a “smart city” that doesn’t facilitate surveillance. We’re also seeing a lot in biology. Can we make a real neural network from cultured neurons? The big question for biologists is how long it will take for any of their research to make it out of the lab.

 

You just hired a deepfake. Get ready for the rise of imposter employees. — from protocol.com by Mike Elgan
New technology — plus the pandemic remote work trend — is helping fraudsters use someone else’s identity to get a job.

Excerpt:

Companies have been increasingly complaining to the FBI about prospective employees using real-time deepfake video and deepfake audio for remote interviews, along with personally identifiable information (PII), to land jobs at American companies.

One place they’re likely getting the PII is through posting fake job openings, which enables them to harvest job candidate information, resumes and more, according to the FBI.

The main drivers appear to be money, espionage, access to company systems and unearned career advancement.

 

Forget the Jetsons. Transportation of the future will look more like ‘Westworld’ — from fastcompany.com
Futuristic public transportation projects are already in the works.

Excerpt:

THE NEXT GENERATION
The way we commute has already started to change. With next generation transportation projects, public transportation is becoming more efficient by employing self-driving buses and trains and installing automatic card-ticketing systems.

From DSC:
But we need to look out here. As we’ve seen before, not everything is so rosy with emerging technologies. See this next item for example:

Cruise’s Robot Car Outages Are Jamming Up San Francisco — from wired.com by Aarian Marshall
In a series of incidents, the GM subsidiary lost contact with its autonomous vehicles, leaving them frozen in traffic and trapping human drivers.

“A letter sent anonymously by a Cruise employee to the California Public Utilities Commission that month alleged that the company loses contact with its driverless vehicles ‘with regularity,’ blocking traffic and potentially hindering emergency vehicles.”

 

The Metaverse in 2040 — from pewresearch.org by Janna Anderson and Lee Rainie
Hype? Hope? Hell? Maybe all three. Experts are split about the likely evolution of a truly immersive ‘metaverse.’ They expect that augmented- and mixed-reality enhancements will become more useful in people’s daily lives. Many worry that current online problems may be magnified if Web3 development is led by those who built today’s dominant web platforms.

 

The metaverse will, at its core, be a collection of new and extended technologies. It is easy to imagine that both the best and the worst aspects of our online lives will be extended by being able to tap into a more-complete immersive experience, by being inside a digital space instead of looking at one from the outside.

Laurence Lannom, vice president at the Corporation for National Research Initiatives

“Virtual, augmented and mixed reality are the gateway to phenomenal applications in medicine, education, manufacturing, retail, workforce training and more, and it is the gateway to deeply social and immersive interactions – the metaverse.”

Elizabeth Hyman, CEO for the XR Association

 


 

The table of contents for the ‘Metaverse in 2040’ set of articles at pewresearch.org — June 30, 2022

 


 
 

U.S. issues charges in first criminal cryptocurrency sanctions case — from washingtonpost.com by Spencer S. Hsu
Federal judge finds U.S. sanctions laws apply to $10 million in Bitcoin sent by American citizen to a country blacklisted by Washington

Excerpt:

The Justice Department has launched its first criminal prosecution involving the alleged use of cryptocurrency to evade U.S. economic sanctions, a federal judge disclosed Friday.

 

Ransomware is already out of control. AI-powered ransomware could be ‘terrifying.’ — from protocol.com by Kyle Alspach
Hiring AI experts to automate ransomware could be the next step for well-endowed ransomware groups that are seeking to scale up their attacks.

Excerpt:

In the perpetual battle between cybercriminals and defenders, the latter have always had one largely unchallenged advantage: The use of AI and machine learning allows them to automate a lot of what they do, especially around detecting and responding to attacks. This leg-up hasn’t been nearly enough to keep ransomware at bay, but it has still been far more than what cybercriminals have ever been able to muster in terms of AI and automation.

That’s because deploying AI-powered ransomware would require AI expertise. And the ransomware gangs don’t have it. At least not yet.

But given the wealth accumulated by a number of ransomware gangs in recent years, it may not be long before attackers do bring aboard AI experts of their own, prominent cybersecurity authority Mikko Hyppönen said.

Also re: AI, see:

Nuance partners with The Academy to launch The AI Collaborative — from artificialintelligence-news.com by Ryan Daws

Excerpt:

Nuance has partnered with The Health Management Academy (The Academy) to launch The AI Collaborative, an industry group focused on advancing healthcare using artificial intelligence and machine learning.

Nuance became a household name for creating the speech recognition engine behind Siri. In recent years, the company has put a strong focus on AI solutions for healthcare and is now a full-service partner of 77 percent of US hospitals and is trusted by over 500,000 physicians daily.

Inflection AI, led by LinkedIn and DeepMind co-founders, raises $225M to transform computer-human interactions — from techcrunch.com by Kyle Wiggers

Excerpts:

Inflection AI, the machine learning startup headed by LinkedIn co-founder Reid Hoffman and founding DeepMind member Mustafa Suleyman, has secured $225 million in equity financing, according to a filing with the U.S. Securities and Exchange Commission.

“[Programming languages, mice, and other interfaces] are ways we simplify our ideas and reduce their complexity and in some ways their creativity and their uniqueness in order to get a machine to do something,” Suleyman told the publication. “It feels like we’re on the cusp of being able to generate language to pretty much human-level performance. It opens up a whole new suite of things that we can do in the product space.”

 
© 2022 | Daniel Christian