The Adobe Firefly Video Model (beta) expands Adobe’s family of creative generative AI models and is the first publicly available video model designed to be safe for commercial use
Enhancements to Firefly models include 4x faster image generation and new capabilities integrated into Photoshop, Illustrator, Adobe Express and now Premiere Pro
Firefly has been used to generate 13 billion images since March 2023 and is seeing rapid adoption by leading brands and enterprises
Add sound to your video via text — Project Super Sonic:
New Dream Weaver — from aisecret.us
Explore Adobe’s New Firefly Video Generative Model
Cybercriminals exploit voice cloning to impersonate individuals, including celebrities and authority figures, to commit fraud. They create urgency and trust to solicit money through deceptive means, often utilizing social media platforms for audio samples.
Today, I’m excited to share with you all the fruit of our effort at @OpenAI to create AI models capable of truly general reasoning: OpenAI’s new o1 model series! Let me explain. 1/ pic.twitter.com/aVGAkb9kxV
We’ve developed a new series of AI models designed to spend more time thinking before they respond. Here is the latest news on o1 research, product and other updates.
The wait is over. OpenAI has just released its long-awaited new reasoning model, OpenAI o1.
It brings advanced reasoning capabilities and can generate entire video games from a single prompt.
Think of it as ChatGPT evolving from fast, intuitive thinking (System-1) to deeper, more deliberate… pic.twitter.com/uAMihaUjol
OpenAI Strawberry (o1) is out! We are finally seeing the paradigm of inference-time scaling popularized and deployed in production. As Sutton said in the Bitter Lesson, there’re only 2 techniques that scale indefinitely with compute: learning & search. It’s time to shift focus to… pic.twitter.com/jTViQucwxr
The new AI model, called o1-preview (why are the AI companies so bad at names?), lets the AI “think through” a problem before solving it. This lets it address very hard problems that require planning and iteration, like novel math or science questions. In fact, it can now beat human PhD experts in solving extremely hard physics problems.
To be clear, o1-preview doesn’t do everything better. It is not a better writer than GPT-4o, for example. But for tasks that require planning, the changes are quite large.
What is the point of Super Realistic AI? — from Heather Cooper who runs Visually AI on Substack
The arrival of super realistic AI image generation, powered by models like Midjourney, FLUX.1, and Ideogram, is transforming the way we create and use visual content.
Recently, many creators (myself included) have been exploring super realistic AI more and more.
But where can this actually be used?
Super realistic AI image generation will have far-reaching implications across various industries and creative fields. Its importance stems from its ability to bridge the gap between imagination and visual representation, offering multiple opportunities for innovation and efficiency.
Today, we’re introducing Audio Overview, a new way to turn your documents into engaging audio discussions. With one click, two AI hosts start up a lively “deep dive” discussion based on your sources. They summarize your material, make connections between topics, and banter back and forth. You can even download the conversation and take it on the go.
Over the past several months, we’ve worked closely with the video editing community to advance the Firefly Video Model. Guided by their feedback and built with creators’ rights in mind, we’re developing new workflows leveraging the model to help editors ideate and explore their creative vision, fill gaps in their timeline and add new elements to existing footage.
Just like our other Firefly generative AI models, editors can create with confidence knowing the Adobe Firefly Video Model is designed to be commercially safe and is only trained on content we have permission to use — never on Adobe users’ content.
We’re excited to share some of the incredible progress with you today — all of which is designed to be commercially safe and available in beta later this year. To be the first to hear the latest updates and get access, sign up for the waitlist here.
From DSC: Anyone who is involved in putting on conferences should at least be aware that this kind of thing is now possible!!! Check out the following posting from Adobe (with help from Tata Consultancy Services (TCS)).
This year, the organizers — innovative industry event company Beyond Ordinary Events — turned to Tata Consultancy Services (TCS) to make the impossible “possible.” Leveraging Adobe generative AI technology across products like Adobe Premiere Pro and Acrobat, they distilled hours of video content in minutes, delivering timely dispatches to thousands of attendees throughout the conference.
…
For POSSIBLE ’24, Muche had an idea for a daily dispatch summarizing each day’s sessions so attendees wouldn’t miss a single insight. But timing would be critical. The dispatch needed to reach attendees shortly after sessions ended to fuel discussions over dinner and carry the excitement over to the next day.
The workflow started in Adobe Premiere Pro, with the writer opening a recording of each session and using the Speech to Text feature to automatically generate a transcript. They saved the transcript as a PDF file and opened it in Adobe Acrobat Pro. Then, using Adobe Acrobat AI Assistant, the writer asked for a session summary.
It was that fast and easy. In less than four minutes, one person turned a 30-minute session into an accurate, useful summary ready for review and publication.
By taking advantage of templates, the designer then added each AI-enabled summary to the newsletter in minutes. With just two people and generative AI technology, TCS accomplished the impossible — for the first time delivering an informative, polished newsletter to all 3,500 conference attendees just hours after the last session of the day.
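From DSC: If you are curious what that transcript-to-summary step looks like in code, here is a rough, hypothetical sketch using the OpenAI Python client. The model name, prompt, and file name are placeholders of mine; this is not the actual Adobe/TCS workflow, just an analogous automation.

```python
from openai import OpenAI  # pip install openai

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def summarize_transcript(path: str) -> str:
    """Turn an exported session transcript into a short, dispatch-style summary."""
    with open(path, encoding="utf-8") as f:
        transcript = f.read()
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "You summarize conference sessions for a same-day attendee newsletter."},
            {"role": "user",
             "content": f"Summarize this session in roughly 150 words:\n\n{transcript}"},
        ],
    )
    return response.choices[0].message.content

print(summarize_transcript("session_transcript.txt"))
```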
RECLAIM: Use generative AI to speed up your daily tasks. Be ruthless. Anything that can be automated, should be.
PROTECT: This is the crucial step. That time you’ve saved? Protect it like it’s the last slice of pizza. Block it off in your calendar. Tell your team it’s sacred.
ELEVATE: Use this protected time for high-level thinking. Strategy. Innovation. The big, meaty problems you never have time for.
AMPLIFY: Here’s where it gets cool. Use generative AI to amp up your strategic thinking. Need to brainstorm solutions to a complex problem? Want to analyze market trends? Generative AI is your new thinking partner.
But amid the relentless onslaught of product launches, investment announcements, and hyped-up features, it’s worth asking: Which of these gen AI apps are people actually using? Which behaviors and categories are gaining traction among consumers? And which AI apps are people returning to, versus dabbling and dropping?
Welcome to the third installment of the Top 100 Gen AI Consumer Apps.
Gen AI’s next inflection point: From employee experimentation to organizational transformation — from mckinsey.com by Charlotte Relyea, Dana Maor, and Sandra Durth with Jan Bouly
As many employees adopt generative AI at work, companies struggle to follow suit. To capture value from current momentum, businesses must transform their processes, structures, and approach to talent.
To harness employees’ enthusiasm and stay ahead, companies need a holistic approach to transforming how the whole organization works with gen AI; the technology alone won’t create value.
Our research shows that early adopters prioritize talent and the human side of gen AI more than other companies (Exhibit 3). Our survey shows that nearly two-thirds of them have a clear view of their talent gaps and a strategy to close them, compared with just 25 percent of the experimenters. Early adopters focus heavily on upskilling and reskilling as a critical part of their talent strategies, as hiring alone isn’t enough to close gaps and outsourcing can hinder strategic-skills development. Finally, 40 percent of early-adopter respondents say their organizations provide extensive support to encourage employee adoption, versus 9 percent of experimenter respondents.
Adobe researchers have revealed an AI model that promises to transform photo editing by harnessing the power of video data. Dubbed “Magic Fixup,” this new technology automates complex image adjustments while preserving artistic intent, potentially reshaping workflows across multiple industries.
Magic Fixup’s core innovation lies in its unique approach to training data. Unlike previous models that relied solely on static images, Adobe’s system learns from millions of video frame pairs. This novel method allows the AI to understand the nuanced ways objects and scenes change under varying conditions of light, perspective, and motion.
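From DSC: To make the "frame pairs" idea concrete, here is a small illustrative sketch (my own, not Adobe's code) that samples pairs of frames a fixed distance apart from a video using OpenCV. Pairs like these show the same scene under slightly different light, perspective, and motion, which is the kind of signal the article describes.

```python
import cv2  # pip install opencv-python

def sample_frame_pairs(video_path: str, gap: int = 30, step: int = 90):
    """Yield (frame_a, frame_b) pairs that are `gap` frames apart, every `step` frames."""
    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    for start in range(0, max(total - gap, 0), step):
        cap.set(cv2.CAP_PROP_POS_FRAMES, start)
        ok_a, frame_a = cap.read()
        cap.set(cv2.CAP_PROP_POS_FRAMES, start + gap)
        ok_b, frame_b = cap.read()
        if ok_a and ok_b:
            yield frame_a, frame_b
    cap.release()

# Example: collect a handful of training-style pairs from one clip
pairs = list(sample_frame_pairs("clip.mp4"))
print(f"Collected {len(pairs)} frame pairs")
```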
Top AI tools people actually use — from heatherbcooper.substack.com by Heather Cooper
How generative AI tools are changing the creative landscape
The shift toward creative tools
Creative tools made up 52% of the top generative AI apps on the list. This seems to reflect a growing consumer demand for accessible creativity through AI, with tools for image, music, speech, video, and editing.
Why it matters: Creative apps are gaining traction because they empower digital artists and content creators with AI-driven tools that simplify and enhance the creative process, making professional-level work more accessible than ever.
“Who to follow in AI” in 2024? [Part I] — from ai-supremacy.com by Michael Spencer [some of the posting is behind a paywall]
#1-20 [of 150] – I combed the internet and found the best sources of AI insights, education, and articles.
LinkedIn | Newsletters | X | YouTube | Substack | Threads | Podcasts
AI In Medicine: 3 Future Scenarios From Utopia To Dystopia — from medicalfuturist.com by Andrea Koncz
There’s a vast difference between baseless fantasizing and realistic forward planning. Structured methodologies help us learn how to “dream well”.
Key Takeaways
We’re often told that daydreaming and envisioning the future is a waste of time. But this notion is misguided.
We all instinctively plan for the future in small ways, like organizing a trip or preparing for a dinner party. This same principle can be applied to larger-scale issues, and smart planning does bring better results.
We show you a method that allows us to think “well” about the future on a larger scale so that it better meets our needs.
Latest Illustrator and Photoshop releases accelerate creative workflows, save pros time and empower designers to realize their visions faster
New Firefly-enabled features like Generative Shape Fill in Illustrator along with the Dimension Tool, Mockup, Text to Pattern, the Contextual Taskbar and performance enhancement tools accelerate productivity and free up time so creative pros can dive deeper into the parts of their work they love
Photoshop introduces the all-new Selection Brush Tool and the general availability of Generate Image, the Adjustment Brush Tool, and other workflow enhancements, empowering creators to make complex edits and unique designs.
Zoom in: Nike used genAI for ideation, including using a variety of prompts to produce images with different textures, materials and colors to kick off the design process.
What they’re saying: “It’s a new way for us to work,” Nike lead footwear designer Juliana Sagat told Axios during a media tour of the showcase on Tuesday.
Major companies are moving at high speed to capture the promises of artificial intelligence in healthcare while doctors and experts attempt to integrate the technology safely into patient care.
“Healthcare is probably the most impactful utility of generative AI that there will be,” Kimberly Powell, vice president of healthcare at AI hardware giant Nvidia (NVDA), which has partnered with Roche’s Genentech (RHHBY) to enhance drug discovery in the pharmaceutical industry, among other investments in healthcare companies, declared at the company’s AI Summit in June.
Mistral reignites this week’s LLM rivalry with Large 2 (source) — from superhuman.ai
Today, we are announcing Mistral Large 2, the new generation of our flagship model. Compared to its predecessor, Mistral Large 2 is significantly more capable in code generation, mathematics, and reasoning. It also provides much stronger multilingual support and advanced function calling capabilities.
Meta releases the biggest and best open-source AI model yet — from theverge.com by Alex Heath
Llama 3.1 outperforms OpenAI and other rivals on certain benchmarks. Now, Mark Zuckerberg expects Meta’s AI assistant to surpass ChatGPT’s usage in the coming months.
Back in April, Meta teased that it was working on a first for the AI industry: an open-source model with performance that matched the best private models from companies like OpenAI.
Today, that model has arrived. Meta is releasing Llama 3.1, the largest-ever open-source AI model, which the company claims outperforms GPT-4o and Anthropic’s Claude 3.5 Sonnet on several benchmarks. It’s also making the Llama-based Meta AI assistant available in more countries and languages while adding a feature that can generate images based on someone’s specific likeness. CEO Mark Zuckerberg now predicts that Meta AI will be the most widely used assistant by the end of this year, surpassing ChatGPT.
4 ways to boost ChatGPT — from wondertools.substack.com by Jeremy Caplan & The PyCoach
Simple tactics for getting useful responses
To help you make the most of ChatGPT, I’ve invited & edited today’s guest post from the author of a smart AI newsletter called The Artificial Corner. I appreciate how Frank Andrade pushes ChatGPT to produce better results with four simple, clever tactics. He offers practical examples to help us all use AI more effectively.
… Frank Andrade: Most of us fail to make the most of ChatGPT.
We omit examples in our prompts.
We fail to assign roles to ChatGPT to guide its behavior.
We let ChatGPT guess instead of providing it with clear guidance.
If you rely on vague prompts, learning how to create high-quality instructions will get you better results. It’s a skill often referred to as prompt engineering. Here are several techniques to get you to the next level.
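From DSC: As a quick illustration of two of those tactics (assigning a role and including an example in the prompt), here is a minimal, hypothetical snippet using the OpenAI Python client. The model name and wording are placeholders of mine, not Frank's exact prompts.

```python
from openai import OpenAI  # pip install openai

client = OpenAI()

messages = [
    # Tactic: assign a role so the model knows how to behave
    {"role": "system",
     "content": "You are a copy editor who rewrites sentences to be concise and active."},
    # Tactic: include a worked example before the real request
    {"role": "user", "content": "Rewrite: 'The report was written by the committee over several weeks.'"},
    {"role": "assistant", "content": "The committee spent several weeks writing the report."},
    # The actual request, now guided by the role and the example above
    {"role": "user", "content": "Rewrite: 'Mistakes were made by the team during the launch.'"},
]

reply = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(reply.choices[0].message.content)
```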
According to Flighty, I logged more than 2,220 flight miles in the last 5 days traveling to three conferences to give keynotes and spend time with housing officers in Milwaukee, college presidents in Mackinac Island, MI, and enrollment and marketing leaders in Raleigh.
Before I rest, I wanted to post some quick thoughts about what I learned. Thank you to everyone who shared their wisdom these past few days:
We need to think about the “why” and “how” of AI in higher ed. The “why” shouldn’t be just because everyone else is doing it. Rather, the “why” is to reposition higher ed for a different future of competitors. The “how” shouldn’t be to just seek efficiency and cut jobs. Rather we should use AI to learn from its users to create a better experience going forward.
Residence halls are not just infrastructure. They are part and parcel of the student experience and critical to student success. Almost half of students living on campus say it increases their sense of belonging, according to research by the Association of College & University Housing Officers.
How do we extend the “residential experience”? More than half of traditional undergraduates who live on campus now take at least one course online. As students increasingly spend time off campus – or move off campus as early as their second year in college – we need to help them continue making the connections they would make in a dorm. Why? 47% of college students believe living in a college residence hall enhanced their ability to resolve conflicts.
Career must be at the core of the student experience for colleges to thrive in the future, says Andy Chan. Yes, some people might see that as too narrow a view of higher ed or might not want to provide cogs for the wheel of the workforce, but without the job, none of the other benefits of college follow: citizenship, health, engagement.
A “triple threat grad” – someone who has an internship, a semester-long project, and an industry credential (think Salesforce or Adobe) in addition to their degree – matters more in the job market than major or institution, says Brandon Busteed.
Every faculty member should think of themselves as an ambassador for the institution. Yes, care about their discipline/department, but that doesn’t survive if the rest of the institution falls down around them.
Presidents need to place bigger bets rather than spend pennies and dimes on a bunch of new strategies. That means to free up resources they need to stop doing things.
Higher ed needs a new business model. Institutions can’t make money just from tuition, and new products, like certificates, bring in pennies on the dollar compared with degrees.
Boards aren’t ready for the future. They are over-indexed on philanthropy and alumni and not enough on the expertise needed for leading higher ed.
That’s the percentage of high school graduates going right on to college. A decade ago it was around 70%. So for all the bellyaching about the demographic cliff in higher ed, just imagine if today we were close to that 70% number. We’d be talking about a few hundred thousand more students in the system.
As I told a gathering of presidents of small colleges and universities last night on Mackinac Island — the first time I had to take [numerous modes of transportation] to get to a conference — being small isn’t distinctive anymore.
There are many reasons undergrad enrollment is down, but they all come down to two interrelated trends: jobs and affordability.
The job has become so central to what students want out of the experience. It’s almost as if colleges now need to guarantee a job.
These institutions will need to rethink the learner relationship with work. Instead of college with work on the side, we might need to move to more of a mindset of work with college on the side by:
Making campus jobs more meaningful. Why can’t we have accounting and finance majors work in the CFO office, liberal arts majors work in IT on platforms such as Salesforce and Workday, which are skills needed in the workplace, etc.?
Apprenticeships are not just for the trades anymore. Integrate work-based learning into the undergrad experience in a much bigger way than internships and even co-ops.
Credentials within the degree. Every graduate should leave college with not just a BA but also a certified credential in things like data viz, project management, the Adobe suite, Alteryx, etc.
The curriculum needs to be more flexible for students to combine work and learning — not only for the experience but also money for college — so more availability of online courses, hybrid courses, and flexible semesters.
From DSC: Last Thursday, I presented at the Educational Technology Organization of Michigan’s Spring 2024 Retreat. I wanted to pass along my slides to you all, in case they are helpful to you.
The artificial intelligence sector has never been more competitive. Forbes received some 1,900 submissions this year, more than double last year’s count. Applicants do not pay a fee to be considered and are judged for their business promise and technical usage of AI through a quantitative algorithm and qualitative judging panels. Companies are encouraged to share data on diversity, and our list aims to promote a more equitable startup ecosystem. But disparities remain sharp in the industry. Only 12 companies have women cofounders, five of whom serve as CEO, the same count as last year. For more, see our full package of coverage, including a detailed explanation of the list methodology, videos and analyses on trends in AI.
New Generative AI video tools coming to Premiere Pro this year will streamline workflows and unlock new creative possibilities, from extending a shot to adding or removing objects in a scene
Adobe is developing a video model for Firefly, which will power video and audio editing workflows in Premiere Pro and enable anyone to create and ideate
Adobe previews early explorations of bringing third-party generative AI models from OpenAI, Pika Labs and Runway directly into Premiere Pro, making it easy for customers to draw on the strengths of different models within the powerful workflows they use every day
AI-powered audio workflows in Premiere Pro are now generally available, making audio editing faster, easier and more intuitive
These key points underscore the importance of addressing challenges, enhancing policies, and leveraging AI technologies to create more inclusive opportunities for individuals with disabilities in the labor market.
The Adobe PDF (Portable Document Format) is one of the most popular formats for online documents. Put simply, if you need to download a tax form or review a company brochure, you’ll probably download a PDF to do so.
Unfortunately, many PDFs aren’t accessible for users with disabilities. A 2023 report from the Department of Justice (DOJ) found that only 20% of the government’s most-downloaded PDFs were conformant with federal accessibility standards. Private businesses also struggle to meet basic accessibility requirements.
The good news: If you think about accessibility when authoring your documents, you can provide a better experience for readers. Here’s how to get started.
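From DSC: One simple way to "get started" is to check whether a PDF already contains the structures assistive technology relies on. The sketch below assumes the open-source pikepdf library and only looks for a structure tree, a declared language, and a title; think of it as a quick smoke test of my own, not a full accessibility conformance check.

```python
import pikepdf  # pip install pikepdf

def quick_accessibility_check(path: str) -> None:
    """Print whether a PDF has some of the basics screen readers rely on."""
    with pikepdf.open(path) as pdf:
        root = pdf.Root  # the document catalog
        print("Tagged (StructTreeRoot present):", "/StructTreeRoot" in root)
        print("MarkInfo dictionary present:    ", "/MarkInfo" in root)
        print("Document language (/Lang):      ", root.get("/Lang", "missing"))
        print("Title in document info:         ", pdf.docinfo.get("/Title", "missing"))

quick_accessibility_check("tax_form.pdf")
```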
What if a person with a visual impairment could receive audio assistance reading a map — and detailed instructions on how to navigate their local railway system? Or what if they could use image-to-text technology to quickly discern what’s in their fridge, along with recipe suggestions and a shopping list for their grocery delivery order?
AI-powered tools that do just that are now a reality thanks to Danish startup Be My Eyes, which uses the visual input capability of GPT-4 to create “virtual volunteers” for people who are blind or vision-impaired. It’s just one example of how advancements in AI are transforming the digital accessibility landscape.
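From DSC: For readers wondering what that image-to-text capability looks like at the API level, here is a hedged sketch of asking a vision-capable model to describe a photo, using the OpenAI Python client. The model name, image URL, and prompt are placeholders of mine, not Be My Eyes' implementation.

```python
from openai import OpenAI  # pip install openai

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder vision-capable model
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Describe the food in this fridge and suggest one simple recipe."},
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/fridge-photo.jpg"}},
            ],
        }
    ],
)
print(response.choices[0].message.content)
```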
The early vibrations of AI have already been shaking the newsroom. One downside of the new technology surfaced at CNET and Sports Illustrated, where editors let AI run amok with disastrous results. Elsewhere in news media, AI is already writing headlines, managing paywalls to increase subscriptions, performing transcriptions, turning stories into audio feeds, discovering emerging stories, fact checking, copy editing and more.
Felix M. Simon, a doctoral candidate at Oxford, recently published a white paper about AI’s journalistic future that eclipses many early studies. Swinging a bat from a crouch that is neither doomer nor Utopian, Simon heralds both the downsides and promise of AI’s introduction into the newsroom and the publisher’s suite.
Unlike earlier technological revolutions, AI is poised to change the business at every level. It will become — if it already isn’t — the beginning of most story assignments and will become, for some, the new assignment editor. Used effectively, it promises to make news more accurate and timely. Used frivolously, it will spawn an ocean of spam. Wherever the production and distribution of news can be automated or made “smarter,” AI will surely step up. But the future has not yet been written, Simon counsels. AI in the newsroom will be only as bad or good as its developers and users make it.
We proposed EMO, an expressive audio-driven portrait-video generation framework. Given a single reference image and vocal audio (e.g., talking or singing), our method can generate vocal avatar videos with expressive facial expressions and varied head poses; it can also generate videos of any duration, depending on the length of the input audio.
New experimental work from Adobe Research is set to change how people create and edit custom audio and music. An early-stage generative AI music generation and editing tool, Project Music GenAI Control allows creators to generate music from text prompts, and then have fine-grained control to edit that audio for their precise needs.
“With Project Music GenAI Control, generative AI becomes your co-creator. It helps people craft music for their projects, whether they’re broadcasters, or podcasters, or anyone else who needs audio that’s just the right mood, tone, and length,” says Nicholas Bryan, Senior Research Scientist at Adobe Research and one of the creators of the technologies.
There’s a lot going on in the world of generative AI, but maybe the biggest is the increasing number of copyright lawsuits being filed against AI companies like OpenAI and Stability AI. So for this episode, we brought on Verge features editor Sarah Jeong, who’s a former lawyer just like me, and we’re going to talk about those cases and the main defense the AI companies are relying on in those copyright cases: an idea called fair use.
The FCC’s war on robocalls has gained a new weapon in its arsenal with the declaration of AI-generated voices as “artificial” and therefore definitely against the law when used in automated calling scams. It may not stop the flood of fake Joe Bidens that will almost certainly trouble our phones this election season, but it won’t hurt, either.
The new rule, contemplated for months and telegraphed last week, isn’t actually a new rule — the FCC can’t just invent them with no due process. Robocalls are just a new term for something largely already prohibited under the Telephone Consumer Protection Act: artificial and pre-recorded messages being sent out willy-nilly to every number in the phone book (something that still existed when they drafted the law).
EIEIO…Chips Ahoy! — from dashmedia.co by Michael Moe, Brent Peus, and Owen Ritz
Here Come the AI Worms — from wired.com by Matt Burgess
Security researchers created an AI worm in a test environment that can automatically spread between generative AI agents—potentially stealing data and sending spam emails along the way.
Now, in a demonstration of the risks of connected, autonomous AI ecosystems, a group of researchers have created one of what they claim are the first generative AI worms—which can spread from one system to another, potentially stealing data or deploying malware in the process. “It basically means that now you have the ability to conduct or to perform a new kind of cyberattack that hasn’t been seen before,” says Ben Nassi, a Cornell Tech researcher behind the research.
SAN JOSE, Calif. – [On 2/20/24], Adobe (Nasdaq:ADBE) introduced AI Assistant in beta, a new generative AI-powered conversational engine in Reader and Acrobat.
…
Simply open Reader or Acrobat and start working with the new capabilities, including:
AI Assistant: AI Assistant recommends questions based on a PDF’s content and answers questions about what’s in the document – all through an intuitive conversational interface.
Generative summary: Get a quick understanding of the content inside long documents with short overviews in easy-to-read formats.
Intelligent citations: Adobe’s custom attribution engine and proprietary AI generate citations so customers can easily verify the source of AI Assistant’s answers.
Easy navigation:
Formatted output:
Respect for customer data:
Beyond PDF: Customers can use AI Assistant with all kinds of document formats (Word, PowerPoint, meeting transcripts, etc.)
Essential skills to thrive with Sora AI
The realm of video editing isn’t just about cutting and splicing anymore.
A Video Editor should learn a diverse set of skills to earn money, such as:
Prompt Writing
Software Mastery
Problem-solving skills
Collaboration and communication skills
Creative storytelling and visual aesthetics
Invest in those skills that give you a competitive edge.
The text file that runs the internet — from theverge.com by David Pierce
For decades, robots.txt governed the behavior of web crawlers. But as unscrupulous AI companies seek out more and more data, the basic social contract of the web is falling apart.
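From DSC: If you have never looked inside a robots.txt file, it is just a plain-text list of rules that crawlers are asked (not forced) to respect. The sketch below runs Python's built-in robots.txt parser against a tiny example policy; "GPTBot" is OpenAI's published crawler user agent, and the rest of the policy is purely illustrative.

```python
from urllib.robotparser import RobotFileParser

# A minimal robots.txt: ordinary crawlers may index the site,
# but one AI training crawler is asked to stay out entirely.
ROBOTS_TXT = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
""".splitlines()

rp = RobotFileParser()
rp.parse(ROBOTS_TXT)

print(rp.can_fetch("GPTBot", "https://example.com/article"))         # False
print(rp.can_fetch("SomeSearchBot", "https://example.com/article"))  # True
```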
The future of corporate learning and development (L&D) is being profoundly reshaped by the progress we are witnessing in artificial intelligence (AI). The increasing availability of new technologies and tools is causing L&D leaders and their teams to rethink their strategy and processes, and even their team structure. The resulting shift, already gaining momentum, will soon move us toward a future where learning experiences are deeply personal, interactive, and contextually rich.
The technological advancements at the forefront of this revolution:
Allow us to create high-quality content faster and at a fraction of the cost previously experienced.
Provide us with a range of new modalities of delivery, such as chat interfaces, as well as immersive and experiential simulations and games.
Enable us to transform learning and training more and more into a journey uniquely tailored to each individual’s learning path, strengths, weaknesses, and confidence levels.
We are already seeing signs of the immediate future—one where AI will adapt not only content but the entire learner experience, on-the-fly and aligned with the needs and requirements of the learner at a specific moment of need.
AI-assisted design & development work: A dramatic shift
This prediction was right. There has been a seismic shift in instructional design, and the role is evolving toward content curation, editing, and resource orchestration. Critical thinking skills are becoming more important than ever to make sure that the final learning asset is accurate. All of this is happening thanks to AI tools like:
Adobe Firefly…
ChatGPT…
Another tool, one that isn’t usually part of the L&D ecosystem, is Microsoft’s Azure AI Services…
Early estimates indicate these improvements save between 30 percent and 60 percent of development time.
As a reminder, meta-learning, in this context, refers to tools that serve up experiences to learners based on their preferences, needs, and goals. It is the superstructure behind the content assets (e.g., programs, courses, articles, videos, etc.) that assembles everything into a coherent and purposeful body of knowledge for the users.
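From DSC: As a toy illustration of that "superstructure" idea (entirely hypothetical and not tied to any particular product), the sketch below scores a small catalog of content assets against a learner's goals and assembles a short, time-boxed path.

```python
from dataclasses import dataclass, field

@dataclass
class Asset:
    title: str
    skills: set          # skills this asset teaches
    minutes: int

@dataclass
class Learner:
    goals: set           # skills the learner wants to acquire
    completed: set = field(default_factory=set)

def assemble_path(learner, catalog, budget_minutes=60):
    """Pick assets that cover the learner's remaining goals within a time budget."""
    remaining = set(learner.goals) - set(learner.completed)
    relevant = [a for a in catalog if a.skills & remaining]
    # Prefer assets that cover more remaining goals, then shorter ones
    relevant.sort(key=lambda a: (-len(a.skills & remaining), a.minutes))
    path, used = [], 0
    for asset in relevant:
        if used + asset.minutes <= budget_minutes and asset.skills & remaining:
            path.append(asset)
            used += asset.minutes
            remaining -= asset.skills
    return path

catalog = [
    Asset("Intro to data viz", {"data viz"}, 20),
    Asset("Storytelling with charts", {"data viz", "communication"}, 25),
    Asset("Pivot tables deep dive", {"spreadsheets"}, 30),
]
learner = Learner(goals={"data viz", "communication"})
for a in assemble_path(learner, catalog):
    print(a.title)
```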
Today, I want to give my honest review of every AI art tool I’ve used and why I love/hate some of them. I’ll highlight their best features and the impact they had on me as an AI artist.
… Midjourney v4: The first AI art tool I loved
While Lensa had its moment and offered users the chance to turn their selfies into stylized AI art effortlessly, Midjourney v4 meant a world of new possibilities. You could create anything you wanted with a prompt!
Speaking of art and creativity, here are two other items to check out!
hallucinate
verb
(of artificial intelligence) to produce false information contrary to the intent of the user and present it as if true and factual. Example: When chatbots hallucinate, the result is often not just inaccurate but completely fabricated.
Soon, every employee will be both AI builder and AI consumer — from zdnet.com by Joe McKendrick, via Robert Gibson on LinkedIn
“Standardized tools and platforms as well as advanced low- or no-code tech may enable all employees to become low-level engineers,” suggests a recent report.
The time could be ripe for a blurring of the lines between developers and end-users, a recent report out of Deloitte suggests. It makes more business sense to focus on bringing in citizen developers for ground-level programming, versus seeking superstar software engineers, the report’s authors argue, or — as they put it — “instead of transforming from a 1x to a 10x engineer, employees outside the tech division could be going from zero to one.”
Along these lines, see:
TECH TRENDS 2024 — from deloitte.com
Six emerging technology trends demonstrate that in an age of generative machines, it’s more important than ever for organizations to maintain an integrated business strategy, a solid technology foundation, and a creative workforce.
The ruling follows a similar decision denying patent registrations naming AI as creators.
The UK Supreme Court ruled that AI cannot get patents, declaring it cannot be named as an inventor of new products because the law considers only humans or companies to be creators.
The New York Times sued OpenAI and Microsoft for copyright infringement on Wednesday, opening a new front in the increasingly intense legal battle over the unauthorized use of published work to train artificial intelligence technologies.
…
The suit does not include an exact monetary demand. But it says the defendants should be held responsible for “billions of dollars in statutory and actual damages” related to the “unlawful copying and use of The Times’s uniquely valuable works.” It also calls for the companies to destroy any chatbot models and training data that use copyrighted material from The Times.
On this same topic, also see:
The historic NYT v. @OpenAI lawsuit filed this morning, as broken down by me, an IP and AI lawyer, general counsel, and longtime tech person and enthusiast.
Tl;dr – It’s the best case yet alleging that generative AI is copyright infringement. Thread. pic.twitter.com/Zqbv3ekLWt
ChatGPT and Other Chatbots
The arrival of ChatGPT sparked tons of new AI tools and changed the way we thought about using a chatbot in our daily lives.
Chatbots like ChatGPT, Perplexity, Claude, and Bing Chat can help content creators by quickly generating ideas, outlines, drafts, and full pieces of content, allowing creators to produce more high-quality content in less time.
These AI tools boost efficiency and creativity in content production across formats like blog posts, social captions, newsletters, and more.
Microsoft is getting ready to upgrade its Surface lineup with new AI-enabled features, according to a report from Windows Central. Unnamed sources told the outlet the upcoming Surface Pro 10 and Surface Laptop 6 will come with a next-gen neural processing unit (NPU), along with Intel and Arm-based options.
With the AI-assisted reporter churning out bread and butter content, other reporters in the newsroom are freed up to go to court, meet a councillor for a coffee or attend a village fete, says the Worcester News editor, Stephanie Preece.
“AI can’t be at the scene of a crash, in court, in a council meeting, it can’t visit a grieving family or look somebody in the eye and tell that they’re lying. All it does is free up the reporters to do more of that,” she says. “Instead of shying away from it, or being scared of it, we are saying AI is here to stay – so how can we harness it?”
This year, I watched AI change the world in real time.
From what happened, I have no doubts that the coming years will be the most transformative period in the history of humankind.
Here’s the full timeline of AI in 2023 (January-December):
What to Expect in AI in 2024 — from hai.stanford.edu
Seven Stanford HAI faculty and fellows predict the biggest stories for next year in artificial intelligence.