Last week a behemoth of a paper was released by AI researchers in academia and industry on the ethics of advanced AI assistants.
It’s one of the most comprehensive and thoughtful papers on developing transformative AI capabilities in socially responsible ways that I’ve read in a while. And it’s essential reading for anyone developing and deploying AI-based systems that act as assistants or agents — including many of the AI apps and platforms that are currently being explored in business, government, and education.
The paper — The Ethics of Advanced AI Assistants — is written by 57 co-authors representing researchers at Google DeepMind, Google Research, Jigsaw, and a number of prominent universities, including the University of Edinburgh, the University of Oxford, and Delft University of Technology. Coming in at 274 pages, this is a massive piece of work. And as the authors persuasively argue, it’s a critically important one at this point in AI development.
Key questions for the ethical and societal analysis of advanced AI assistants include:
What is an advanced AI assistant? How does an AI assistant differ from other kinds of AI technology?
What capabilities would an advanced AI assistant have? How capable could these assistants be?
What is a good AI assistant? Are there certain values that we want advanced AI assistants to evidence across all contexts?
Are there limits on what AI assistants should be allowed to do? If so, how are these limits determined?
What should an AI assistant be aligned with? With user instructions, preferences, interests, values, well-being or something else?
What issues need to be addressed for AI assistants to be safe? What does safety mean for this class of technologies?
What new forms of persuasion might advanced AI assistants be capable of? How can we ensure that users remain appropriately in control of the technology?
How can people – especially vulnerable users – be protected from AI manipulation and unwanted disclosure of personal information?
Is anthropomorphism for AI assistants morally problematic? If so, might it still be permissible under certain conditions?
AI Resources and Teaching — from Kent State University
Kent State University offers valuable resources for educators interested in incorporating artificial intelligence (AI) into their teaching practices. The university recognizes that the rapid emergence of AI tools presents both challenges and opportunities in higher education.
The AI Resources and Teaching page provides educators with information and guidance on various AI tools and their responsible use within and beyond the classroom. The page covers different areas of AI application, including language generation, visuals, videos, music, information extraction, quantitative analysis, and AI syllabus language examples.
For all its jaw-dropping power, Watson the computer overlord was a weak teacher. It couldn’t engage or motivate kids, inspire them to reach new heights or even keep them focused on the material — all qualities of the best mentors.
It’s a finding with some resonance to our current moment of AI-inspired doomscrolling about the future of humanity in a world of ascendant machines. “There are some things AI is actually very good for,” Nitta said, “but it’s not great as a replacement for humans.”
His five-year journey to what was essentially a dead end could also prove instructive as ChatGPT and other programs like it fuel a renewed, multimillion-dollar experiment to, in essence, prove him wrong.
…
To be sure, AI can do sophisticated things such as generating quizzes from a class reading and editing student writing. But the idea that a machine or a chatbot can actually teach as a human can, he said, represents “a profound misunderstanding of what AI is actually capable of.”
Nitta, who still holds deep respect for the Watson lab, admits, “We missed something important. At the heart of education, at the heart of any learning, is engagement. And that’s kind of the Holy Grail.”
From DSC: This is why the vision that I’ve been tracking and working on has always said that HUMAN BEINGS will be necessary — they are key to realizing this vision. Along these lines, here’s a relevant quote:
Another crucial component of a new learning theory for the age of AI would be the cultivation of “blended intelligence.” This concept recognizes that the future of learning and work will involve the seamless integration of human and machine capabilities, and that learners must develop the skills and strategies needed to effectively collaborate with AI systems. Rather than viewing AI as a threat to human intelligence, a blended intelligence approach seeks to harness the complementary strengths of humans and machines, creating a symbiotic relationship that enhances the potential of both.
Per Alexander “Sasha” Sidorkin, Head of the National Institute on AI in Society at California State University Sacramento.
How Early Adopters of Gen AI Are Gaining Efficiencies — from knowledge.wharton.upenn.edu by Prasanna (Sonny) Tambe and Scott A. Snyder; via Ray Schroeder on LinkedIn
Enterprises are seeing gains from generative AI in productivity and strategic planning, according to speakers at a recent Wharton conference.
Its unique strengths in translation, summarization, and content generation are especially useful in processing unstructured data. Some 80% of all new data in enterprises is unstructured, he noted, citing research firm Gartner. Very little of the unstructured data that resides in places like emails “is used effectively at the point of decision making,” he noted. “[With gen AI], we have a real opportunity” to garner new insights from all the information that resides in emails, team communication platforms like Slack, and agile project management tools like Jira, he said.
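To make that concrete, here’s a minimal sketch of the pattern the speakers describe: pointing an LLM at a pile of unstructured messages and asking for decision-ready takeaways. The model name and sample messages are my own illustrative assumptions, not anything from the conference.

```python
# Minimal sketch: distilling unstructured messages (e.g., exported emails or
# Slack threads) into decision-ready bullet points with an LLM.
# Assumes the OpenAI Python SDK (`pip install openai`) and an API key in the
# OPENAI_API_KEY environment variable; model name and inputs are illustrative.
from openai import OpenAI

client = OpenAI()

messages = [
    "Email: Vendor quote expires Friday; legal review still pending.",
    "Slack: QA found a regression in the checkout flow, fix ETA Thursday.",
    "Jira comment: Release blocked until the regression ticket is closed.",
]

prompt = (
    "Summarize the following workplace messages into three short bullet "
    "points a decision-maker could act on:\n\n" + "\n".join(messages)
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```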
Here are 6 YouTube channels I watch to stay up to date with AI. This list will be useful whether you’re a casual AI enthusiast or an experienced programmer.
1. Matt Wolfe: AI for non-coders
This is a fast-growing YouTube channel focused on artificial intelligence for non-coders. On this channel, you’ll find videos about ChatGPT, Midjourney, and any AI tool that is gaining popularity.
#3 Photomath
Photomath is a comprehensive math help app that provides step-by-step explanations for a wide range of math problems, from elementary to college level. Photomath is only available as a mobile app. (link)
Features:
Get step-by-step solutions with multiple methods to choose from
Scan any math problem, including word problems, using the app’s camera
Access custom visual aids and extra “how” and “why” tips for deeper understanding
Google researchers have developed a new artificial intelligence system that can generate lifelike videos of people speaking, gesturing and moving — from just a single still photo. The technology, called VLOGGER, relies on advanced machine learning models to synthesize startlingly realistic footage, opening up a range of potential applications while also raising concerns around deepfakes and misinformation.
I’m fascinated by the potential of these tools to augment and enhance our work and creativity. There’s no denying the impressive capabilities we’re already seeing with text generation, image creation, coding assistance, and more. Used thoughtfully, AI can be a powerful productivity multiplier.
At the same time, I have significant concerns about the broader implications of this accelerating technology, especially for education and society at large. We’re traversing new ground at a breakneck pace, and it’s crucial that we don’t blindly embrace AI without considering the potential risks.
My worry is that by automating away too many tasks, even seemingly rote ones like creating slide decks, we risk losing something vital—humanity at the heart of knowledge work.
Nvidia has announced a partnership with Hippocratic AI to introduce AI “agents” aimed at replacing nurses in hospitals. These AI “nurses” cost significantly less than human nurses and are purportedly intended to address staffing issues by handling “low-risk,” patient-facing tasks via video calls. However, concerns have been raised regarding the ethical implications and effectiveness of replacing human nurses with AI, particularly given the complex nature of medical care.
A glimpse of the future of AI at work:
I got early access to Devin, the “AI developer” – it is slow & breaks often, but you can start to see what an AI agent can do.
It makes a plan and executes it autonomously, doing research, writing code & debugging, without you watching. pic.twitter.com/HHBQQDQZ9q
As the FlexOS research study “Generative AI at Work” concluded based on a survey amongst knowledge workers, ChatGPT reigns supreme.
…
2. AI Tool Usage is Way Higher Than People Expect – Beating Netflix, Pinterest, Twitch
As measured by data analysis platform Similarweb based on global web traffic tracking, the AI tools in this list generate over 3 billion monthly visits.
With 1.67 billion visits, ChatGPT represents over half of this traffic and is already bigger than Netflix, Microsoft, Pinterest, Twitch, and The New York Times.
Something unusual is happening in America. Demand for electricity, which has stayed largely flat for two decades, has begun to surge.
Over the past year, electric utilities have nearly doubled their forecasts of how much additional power they’ll need by 2028 as they confront an unexpected explosion in the number of data centers, an abrupt resurgence in manufacturing driven by new federal laws, and millions of electric vehicles being plugged in.
The tumult could seem like a distraction from the startup’s seemingly unending march toward AI advancement. But the tension, and the latest debate with Musk, illuminates a central question for OpenAI, along with the tech world at large as it’s increasingly consumed by artificial intelligence: Just how open should an AI company be?
…
The meaning of the word “open” in “OpenAI” seems to be a particular sticking point for both sides — something that you might think sounds, on the surface, pretty clear. But actual definitions are both complex and controversial.
In partnership with the National Cancer Institute, or NCI, researchers from the Department of Energy’s Oak Ridge National Laboratory and Louisiana State University developed a long-sequenced AI transformer capable of processing millions of pathology reports to provide experts researching cancer diagnoses and management with exponentially more accurate information on cancer reporting.
Vast swaths of the United States are at risk of running short of power as electricity-hungry data centers and clean-technology factories proliferate around the country, leaving utilities and regulators grasping for credible plans to expand the nation’s creaking power grid.
…
A major factor behind the skyrocketing demand is the rapid innovation in artificial intelligence, which is driving the construction of large warehouses of computing infrastructure that require exponentially more power than traditional data centers. AI is also part of a huge scale-up of cloud computing. Tech firms like Amazon, Apple, Google, Meta and Microsoft are scouring the nation for sites for new data centers, and many lesser-known firms are also on the hunt.
The Obscene Energy Demands of A.I. — from newyorker.com by Elizabeth Kolbert
How can the world reach net zero if it keeps inventing new ways to consume energy?
“There’s a fundamental mismatch between this technology and environmental sustainability,” de Vries said. Recently, the world’s most prominent A.I. cheerleader, Sam Altman, the C.E.O. of OpenAI, voiced similar concerns, albeit with a different spin. “I think we still don’t appreciate the energy needs of this technology,” Altman said at a public appearance in Davos. He didn’t see how these needs could be met, he went on, “without a breakthrough.” He added, “We need fusion or we need, like, radically cheaper solar plus storage, or something, at massive scale—like, a scale that no one is really planning for.”
A generative AI reset: Rewiring to turn potential into value in 2024 — from mckinsey.com by Eric Lamarre, Alex Singla, Alexander Sukharevsky, and Rodney Zemmel; via Philippa Hardman
The generative AI payoff may only come when companies do deeper organizational surgery on their business.
Figure out where gen AI copilots can give you a real competitive advantage
Upskill the talent you have but be clear about the gen-AI-specific skills you need
Form a centralized team to establish standards that enable responsible scaling
Set up the technology architecture to scale
Ensure data quality and focus on unstructured data to fuel your models
Build trust and reusability to drive adoption and scale
Since ChatGPT dropped in the fall of 2022, everyone and their donkey has tried their hand at prompt engineering—finding a clever way to phrase your query to a large language model (LLM) or AI art or video generator to get the best results or sidestep protections. The Internet is replete with prompt-engineering guides, cheat sheets, and advice threads to help you get the most out of an LLM.
…
However, new research suggests that prompt engineering is best done by the model itself, and not by a human engineer. This has cast doubt on prompt engineering’s future—and increased suspicions that a fair portion of prompt-engineering jobs may be a passing fad, at least as the field is currently imagined.
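For the curious, the core loop in that line of research is simple enough to sketch. Below is a toy version of model-driven prompt optimization: the model proposes rewrites of a seed prompt, each candidate is scored against a small labeled set, and the best scorer wins. The llm() function is a stand-in stub, not a real API, and the eval set is illustrative only.

```python
# Toy sketch of model-driven prompt optimization: the model proposes prompt
# variants, each is scored on a small labeled set, and the top scorer wins.
# llm() is a placeholder stub, not a real API call; swap in an actual model
# (and a realistic eval set) to make this do real work.

def llm(prompt: str) -> str:
    """Stand-in for a real LLM call; always answers '4' so the script runs."""
    return "4"

eval_set = [("What is 2 + 2?", "4"), ("What is 10 / 2?", "5")]

def score(candidate: str) -> float:
    """Fraction of eval questions answered correctly under this prompt."""
    correct = sum(llm(f"{candidate}\n\n{q}").strip() == a for q, a in eval_set)
    return correct / len(eval_set)

seed = "Answer the question with a single number."
# Ask the model itself to rewrite the instruction, then keep the best scorer.
candidates = [seed] + [
    llm(f"Rewrite this instruction so answers are more accurate: {seed}")
    for _ in range(3)
]
best = max(candidates, key=score)
print("Best prompt:", best)
```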
There is one very clear parallel between the digital spreadsheet and generative AI: both are computer apps that collapse time. A task that might have taken hours or days can suddenly be completed in seconds. So accept for a moment the premise that the digital spreadsheet has something to teach us about generative AI. What lessons should we absorb?
It’s that pace of change that gives me pause. Ethan Mollick, author of the forthcoming book Co-Intelligence, tells me “if progress on generative AI stops now, the spreadsheet is not a bad analogy”. We’d get some dramatic shifts in the workplace, a technology that broadly empowers workers and creates good new jobs, and everything would be fine. But is it going to stop any time soon? Mollick doubts that, and so do I.
Last month, OpenAI chief executive Sam Altman finally admitted what researchers have been saying for years — that the artificial intelligence (AI) industry is heading for an energy crisis. It’s an unusual admission. At the World Economic Forum’s annual meeting in Davos, Switzerland, Altman warned that the next wave of generative AI systems will consume vastly more power than expected, and that energy systems will struggle to cope. “There’s no way to get there without a breakthrough,” he said.
I’m glad he said it. I’ve seen consistent downplaying and denial about the AI industry’s environmental costs since I started publishing about them in 2018. Altman’s admission has got researchers, regulators and industry titans talking about the environmental impact of generative AI.
Yesterday, Nvidia reported $22.1 billion in revenue for the fourth quarter of fiscal 2024 (ending January 31, 2024), easily topping Wall Street’s expectations. Revenue grew 265% from a year ago, thanks to the explosive growth of generative AI.
…
He also repeated a notion about “sovereign AI”: countries protecting the data of their users, and companies protecting the data of their employees, by keeping large language models contained within the borders of the country or the company for safety purposes.
BREAKING: Adobe has created a new 50-person AI research org called CAVA (Co-Creation for Audio, Video, & Animation).
I can’t help but wonder if OpenAI’s Sora has been a wake-up call for Adobe to formalize and accelerate its video and multimodal creation efforts?
Nvidia is building a new type of data centre called an AI factory. Every company—biotech, self-driving, manufacturing, etc.—will need an AI factory.
Jensen is looking forward to foundational robotics and state space models. According to him, foundational robotics could have a breakthrough next year.
The crunch for Nvidia GPUs is here to stay. It won’t be able to catch up on supply this year, and probably not next year either.
A new generation of GPUs called Blackwell is coming out, and the performance of Blackwell is off the charts.
Nvidia’s business is now roughly 70% inference and 30% training, meaning AI is getting into users’ hands.
Text to video via OpenAI’s Sora. (I had taken this screenshot on the 15th, but am posting it now.)
We’re teaching AI to understand and simulate the physical world in motion, with the goal of training models that help people solve problems that require real-world interaction.
Introducing Sora, our text-to-video model. Sora can generate videos up to a minute long while maintaining visual quality and adherence to the user’s prompt.
At the University of Pennsylvania, undergraduate students in its school of engineering will soon be able to study for a bachelor of science degree in artificial intelligence.
What can one do with an AI degree? The University of Pennsylvania says students will be able to apply the skills they learn in school to build responsible AI tools, develop materials for emerging chips and hardware, and create AI-driven breakthroughs in healthcare through new antibiotics, among other things.
Google on Monday announced plans to help train people in Europe with skills in artificial intelligence, the latest tech giant to invest in preparing workers and economies amid the disruption brought on by technologies they are racing to develop.
The acceleration of AI deployments has gotten so absurdly out of hand that a draft post I started a week ago about a new development is now out of date.
… The Pace is Out of Control
A mere week since Ultra 1.0’s announcement, Google has now introduced us to Gemini 1.5, a model they are clearly positioning to be the leader in the field. Here is the full technical report for Gemini 1.5, and what it can do is stunning.
[St. Louis, MO, February 14, 2024] – In a bold move that counters the conventions of more traditional schools, Maryville University has unveiled a substantial $21 million multi-year investment in artificial intelligence (AI) and cutting-edge technologies. This groundbreaking initiative is set to transform the higher education experience, using the latest technology to support student success and a five-star experience for thousands of students both on campus and online.
The scammers used digitally recreated versions of an international company’s Chief Financial Officer and other employees to order $25 million in money transfers during a video conference call containing just one real person.
The victim, an employee at the Hong Kong branch of an unnamed multinational firm, was duped into taking part in a video conference call in which they were the only real person – the rest of the group were fake representations of real people, writes SCMP.
As we’ve seen in previous incidents where deepfakes were used to recreate someone without their permission, the scammers utilized publicly available video and audio footage to create these digital versions.
Since we launched Bard last year, people all over the world have used it to collaborate with AI in a completely new way — to prepare for job interviews, debug code, brainstorm new business ideas or, as we announced last week, create captivating images.
Our mission with Bard has always been to give you direct access to our AI models, and Gemini represents our most capable family of models. To reflect this, Bard will now simply be known as Gemini.
A new way to discover places with generative AI in Maps — from blog.google by Miriam Daniel; via AI Valley
Here’s a look at how we’re bringing generative AI to Maps — rolling out this week to select Local Guides in the U.S.
Today, we’re introducing a new way to discover places with generative AI to help you do just that — no matter how specific, niche or broad your needs might be. Simply say what you’re looking for and our large language models (LLMs) will analyze Maps’ detailed information about more than 250 million places and trusted insights from our community of over 300 million contributors to quickly make suggestions for where to go.
Starting in the U.S., this early access experiment launches this week to select Local Guides, who are some of the most active and passionate members of the Maps community. Their insights and valuable feedback will help us shape this feature so we can bring it to everyone over time.
Google Prepares for a Future Where Search Isn’t King — from wired.com by Lauren Goode
CEO Sundar Pichai tells WIRED that Google’s new, more powerful Gemini chatbot is an experiment in offering users a way to get things done without a search engine. It’s also a direct shot at ChatGPT.
Rufus is an expert shopping assistant trained on Amazon’s product catalog and information from across the web to answer customer questions on shopping needs, products, and comparisons, make recommendations based on this context, and facilitate product discovery in the same Amazon shopping experience customers use regularly.
Launching [2/1/24] in beta to a small subset of customers in Amazon’s mobile app, Rufus will progressively roll out to additional U.S. customers in the coming weeks.
You’re a teacher who wants to integrate AI into your teaching. What do you do? I often get asked, “How should I start with AI in my school or university?” This, I think, is one answer.
Continuity with teaching
One school has got this exactly right, in my opinion. Meredith Joy Morris has integrated ChatGPT into the teaching process. The teacher does their thing and the chatbot picks up where the teacher stops, augmenting and scaling the teaching and learning process, passing the baton to the learners, who carry on. This gives the learner a more personalised experience, encouraging independent learning by using the undoubted engagement that 1:1 dialogue provides.
There’s no way any teacher can provide this kind of carry-on support with even a handful of students, never mind a class of 30 or a course with 100. Teaching here is ‘extended’ and ‘scaled’ by AI. The feedback from the students was extremely positive.
The transition which AI forces me to make is no longer to evaluate writings, but to evaluate writers. I am accustomed to grading essays impersonally with an objective rubric, treating the text as distinct from the author and commenting only on the features of the text. I need to transition to evaluating students a bit more holistically, as philosophers – to follow along with them in the early stages of the writing process, to ask them to present their ideas orally in conversation or in front of their peers, to push them to develop the intellectual virtues that they will need if they are not going to be mastered by the algorithms seeking to manipulate them. That’s the sort of development I’ve meant to encourage all along, not paragraph construction and citation formatting. If my grading practices incentivize outsourcing to a machine intelligence, I need to change my grading practices.
[Bryan Alexander] There’s a crying need for faculty and staff professional development about generative AI. The topic is complicated and fast-moving. Already the people I know who are seriously offering such support are massively overscheduled. Digital materials are popular. Books are lagging but will gradually surface. I hope we see more academics lead more professional development offerings.
For an academic institution to take emerging AI seriously it might have to set up a new body. Present organizational nodes are not necessarily a good fit.
When Satya Nitta worked at IBM, he and a team of colleagues took on a bold assignment: Use the latest in artificial intelligence to build a new kind of personal digital tutor.
This was before ChatGPT existed, and fewer people were talking about the wonders of AI. But Nitta was working with what was perhaps the highest-profile AI system at the time, IBM’s Watson. That AI tool had pulled off some big wins, including beating humans on the Jeopardy quiz show in 2011.
Nitta says he was optimistic that Watson could power a generalized tutor, but he knew the task would be extremely difficult. “I remember telling IBM top brass that this is going to be a 25-year journey,” he recently told EdSurge.
What are the advantages of AI in education? Canva’s study found 78 percent of teachers are interested in using AI education tools, but their experience with the technology remains limited, with 93 percent indicating they know “a little” or “nothing” about it – though this lack of experience hasn’t stopped teachers from quickly discovering and considering its benefits:
60 percent of teachers agree it has given them ideas to boost student productivity
59 percent of teachers agree it has cultivated more ways for their students to be creative
56 percent of teachers agree it has made their lives easier
…
When looking at the ways teachers are already using generative artificial intelligence, the most common uses were:
Washington state issued new guidelines for K-12 public schools last week based on the principle of “embracing a human-centered approach to AI,” which also embraces the use of AI in the education process. The state’s Superintendent of Public Instruction, Chris Reykdal, commented in a letter accompanying the new guidelines:
Giving educators time back to invest in themselves and their students
Boost productivity and creativity with Duet AI: Educators can get fresh ideas and save time using generative AI across Workspace apps. With Duet AI, they can get help drafting lesson plans in Docs, creating images in Slides, building project plans in Sheets and more — all with control over their data.
Galaxy AI introduces meaningful intelligence aimed at enhancing every part of life, especially the phone’s most fundamental role: communication. When you need to defy language barriers, Galaxy S24 makes it easier than ever. Chat with another student or colleague from abroad. Book a reservation while on vacation in another country. It’s all possible with Live Translate, two-way, real-time voice and text translations of phone calls within the native app. No third-party apps are required, and on-device AI keeps conversations completely private.
With Interpreter, live conversations can be instantly translated on a split-screen view so people standing opposite each other can read a text transcription of what the other person has said. It even works without cellular data or Wi-Fi.
Galaxy S24 — from theneurondaily.com by Noah Edelman & Pete Huang
Samsung just announced the first truly AI-powered smartphone: the Galaxy S24.
For us AI power users, the features aren’t exactly new, but it’s the first time we’ve seen them packaged up into a smartphone (Siri doesn’t count, sorry).
OpenAI on Thursday announced its first partnership with a higher education institution.
Starting in February, Arizona State University will have full access to ChatGPT Enterprise and plans to use it for coursework, tutoring, research and more.
The partnership has been in the works for at least six months.
ASU plans to build a personalized AI tutor for students, allow students to create AI avatars for study help and broaden the university’s prompt engineering course.
The collaboration between ASU and OpenAI brings the advanced capabilities of ChatGPT Enterprise into higher education, setting a new precedent for how universities enhance learning, creativity and student outcomes.
“ASU recognizes that augmented and artificial intelligence systems are here to stay, and we are optimistic about their ability to become incredible tools that help students to learn, learn more quickly and understand subjects more thoroughly,” ASU President Michael M. Crow said. “Our collaboration with OpenAI reflects our philosophy and our commitment to participating directly in the responsible evolution of AI learning technologies.”
AI <> Academia — from drphilippahardman.substack.com by Dr. Philippa Hardman
What might emerge from ASU’s pioneering partnership with OpenAI?
Phil’s Wish List #2: Smart Curriculum Development
ChatGPT assists in creating and updating course curricula, based on both student data and emerging domain and pedagogical research on the topic.
Output: using AI, it will be possible to review course content and make automated, data-informed recommendations based on the latest pedagogical and domain-specific research.
Potential Impact: increased dynamism and relevance in course content and reduced administrative lift for academics.
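As a thought experiment, here’s what a first pass at this wish-list item could look like in code. Everything below (the course data, the research digest, the model choice) is hypothetical, not ASU’s or OpenAI’s actual implementation; it simply shows an LLM reviewing each syllabus unit against a digest of recent research and suggesting updates.

```python
# Hypothetical sketch of AI-assisted curriculum review: send each syllabus
# unit plus a digest of recent research to an LLM and ask for recommended
# updates. Assumes the OpenAI Python SDK (`pip install openai`, key in
# OPENAI_API_KEY); all course data and the model name are illustrative.
from openai import OpenAI

client = OpenAI()

course_units = {
    "Week 3: Search algorithms": "Covers BFS/DFS only; last revised 2019.",
    "Week 8: Language models": "Covers n-gram models; no neural methods.",
}
research_digest = (
    "Recent pedagogy research favors active learning and frequent low-stakes "
    "assessment; in the domain, transformer LLMs now dominate NLP practice."
)

for unit, contents in course_units.items():
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{
            "role": "user",
            "content": (
                f"Unit: {unit}\nCurrent content: {contents}\n"
                f"Recent research digest: {research_digest}\n"
                "Recommend specific updates to this unit, and say which "
                "part of the digest motivates each recommendation."
            ),
        }],
    )
    print(unit, "->", response.choices[0].message.content, "\n")
```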
Unlike traditional leadership, adaptable leadership is not bound by rigid rules and protocols. Instead, it thrives on flexibility. Adaptable leaders are willing to experiment, make course corrections, and pivot when necessary. Adaptable leadership is about flexibility, resilience and a willingness to embrace change. It embodies several key principles that redefine the role of leaders in organizations:
Embracing uncertainty
Adaptable leaders understand that uncertainty is the new norm. They do not shy away from ambiguity but instead see it as an opportunity for growth and innovation. They encourage a culture of experimentation and learning from failure.
Empowering teams
Instead of dictating every move, adaptable leaders empower their teams to take ownership of their work. They foster an environment of trust and collaboration, enabling individuals to contribute their unique perspectives and skills.
Continuous learning
Adaptable leaders are lifelong learners. They constantly seek new knowledge, stay informed about industry trends, and encourage their teams to do the same. They understand that knowledge is a dynamic asset that must be constantly updated.
Major AI in Education Related Developments this week — from stefanbauschard.substack.com by Stefan Bauschard
ASU integrates with ChatGPT, K-12 AI integrations, Agents & the Rabbit, Uruguay, Meta and AGI, Rethinking curriculum
“The greatest risk is leaving school curriculum unchanged when the entire world is changing.” – Hadi Partovi, founder of Code.org; angel investor in Facebook, Dropbox, Airbnb, and Uber
Tutorbots in college. On a more limited scale, Georgia State University, Morgan State University, and the University of Central Florida are piloting a project using chatbots to support students in foundational math and English courses.
Pioneering AI-Driven Instructional Design in Small College Settings — from campustechnology.com by Gopu Kiron
For institutions that lack the budget or staff expertise to utilize instructional design principles in online course development, generative AI may offer a way forward.
Unfortunately, smaller colleges — arguably the institutions whose students are likely to benefit the most from ID enhancements — frequently find themselves excluded from authentically engaging in the ID arena due to tight budgets, limited faculty online course design expertise, and the lack of ID-specific staff roles. Despite this, recent developments in generative AI may offer these institutions a low-cost, tactical avenue to compete with more established players.
There’s a new AI from Google DeepMind called AlphaGeometry that totally nails solving super hard geometry problems. We’re talking problems so tough only math geniuses who compete in the International Mathematical Olympiad can figure them out.
hallucinate
verb
(of artificial intelligence) to produce false information contrary to the intent of the user and present it as if true and factual. Example: When chatbots hallucinate, the result is often not just inaccurate but completely fabricated.
Soon, every employee will be both AI builder and AI consumer — from zdnet.com by Joe McKendrick; via Robert Gibson on LinkedIn
“Standardized tools and platforms as well as advanced low- or no-code tech may enable all employees to become low-level engineers,” suggests a recent report.
The time could be ripe for a blurring of the lines between developers and end-users, a recent report out of Deloitte suggests. It makes more business sense to focus on bringing in citizen developers for ground-level programming, versus seeking superstar software engineers, the report’s authors argue, or — as they put it — “instead of transforming from a 1x to a 10x engineer, employees outside the tech division could be going from zero to one.”
Along these lines, see:
TECH TRENDS 2024 — from deloitte.com
Six emerging technology trends demonstrate that in an age of generative machines, it’s more important than ever for organizations to maintain an integrated business strategy, a solid technology foundation, and a creative workforce.
The ruling follows a similar decision denying patent registrations naming AI as creators.
The UK Supreme Court ruled that AI cannot get patents, declaring it cannot be named as an inventor of new products because the law considers only humans or companies to be creators.
The New York Times sued OpenAI and Microsoft for copyright infringement on Wednesday, opening a new front in the increasingly intense legal battle over the unauthorized use of published work to train artificial intelligence technologies.
…
The suit does not include an exact monetary demand. But it says the defendants should be held responsible for “billions of dollars in statutory and actual damages” related to the “unlawful copying and use of The Times’s uniquely valuable works.” It also calls for the companies to destroy any chatbot models and training data that use copyrighted material from The Times.
On this same topic, also see:
The historic NYT v. @OpenAI lawsuit filed this morning, as broken down by me, an IP and AI lawyer, general counsel, and longtime tech person and enthusiast.
Tl;dr – It’s the best case yet alleging that generative AI is copyright infringement. Thread. pic.twitter.com/Zqbv3ekLWt
ChatGPT and Other Chatbots
The arrival of ChatGPT sparked tons of new AI tools and changed the way we thought about using a chatbot in our daily lives.
Chatbots like ChatGPT, Perplexity, Claude, and Bing Chat can help content creators by quickly generating ideas, outlines, drafts, and full pieces of content, allowing creators to produce more high-quality content in less time.
These AI tools boost efficiency and creativity in content production across formats like blog posts, social captions, newsletters, and more.
Microsoft is getting ready to upgrade its Surface lineup with new AI-enabled features, according to a report from Windows Central. Unnamed sources told the outlet the upcoming Surface Pro 10 and Surface Laptop 6 will come with a next-gen neural processing unit (NPU), along with Intel and Arm-based options.
With the AI-assisted reporter churning out bread-and-butter content, other reporters in the newsroom are freed up to go to court, meet a councillor for a coffee or attend a village fete, says the Worcester News editor, Stephanie Preece.
“AI can’t be at the scene of a crash, in court, in a council meeting, it can’t visit a grieving family or look somebody in the eye and tell that they’re lying. All it does is free up the reporters to do more of that,” she says. “Instead of shying away from it, or being scared of it, we are saying AI is here to stay – so how can we harness it?”
This year, I watched AI change the world in real time.
From what happened, I have no doubts that the coming years will be the most transformative period in the history of humankind.
Here’s the full timeline of AI in 2023 (January-December):
What to Expect in AI in 2024 — from hai.stanford.edu
Seven Stanford HAI faculty and fellows predict the biggest stories for next year in artificial intelligence.
Artificial intelligence is automating the creation of fake news, spurring an explosion of web content mimicking factual articles that instead disseminates false information about elections, wars and natural disasters.
Since May, websites hosting AI-created false articles have increased by more than 1,000 percent, ballooning from 49 sites to more than 600, according to NewsGuard, an organization that tracks misinformation.
Historically, propaganda operations have relied on armies of low-paid workers or highly coordinated intelligence organizations to build sites that appear to be legitimate. But AI is making it easy for nearly anyone — whether they are part of a spy agency or just a teenager in their basement — to create these outlets, producing content that is at times hard to differentiate from real news.
Their AI chatbot, designed to assist customers in their vehicle search, became a social media sensation for all the wrong reasons. One user even convinced the chatbot to agree to sell a 2024 Chevy Tahoe for just one dollar!
This story is exactly why AI implementation needs to be approached strategically. Learning to use AI also means learning to build in the guardrails and boundaries.
The pharmacy chain Rite Aid misused facial recognition technology in a way that subjected shoppers to unfair searches and humiliation, the Federal Trade Commission said Tuesday, part of a landmark settlement that could raise questions about the technology’s use in stores, airports and other venues nationwide.
…
But the chain’s “reckless” failure to adopt safeguards, coupled with the technology’s long history of inaccurate matches and racial biases, ultimately led store employees to falsely accuse shoppers of theft, leading to “embarrassment, harassment, and other harm” in front of their family members, co-workers and friends, the FTC said in a statement.