From DSC: I opened up a BRAND NEW box of cereal from Post the other day. As I looked down into the package, I realized that it was roughly half full. (This has happened many times before, but it struck me so much this time that I had to take pictures of it and post this item.) .
. Looks can be deceiving for sure. It looks like I should have been getting a full box of cereal…but no…only about half of the package was full. It’s another example of the shrinkflation of things — which can also be described as people deceptively ripping other people off.
“As long as I’m earning $$, I don’t care how it impacts others.”<– That’s not me talking, but it’s increasingly the perspective that many Americans have these days. We don’t bother with ethics and morals…how old-fashioned can you get, right? We just want to make as much money as possible and to hell with how our actions/products are impacting others.
Another example from the food industry is one of the companies that I worked for in the 1990’s — Kraft Foods. Kraft has not served peoples’ health well at all. Even when they tried to take noble steps to provide healthier foods, other food executives/companies in the industry wouldn’t hop on board. They just wanted to please Wall Street, not Main Street. So companies like Kraft have contributed to the current situations that we face which involve obesity, diabetes, heart attacks, and other ailments. (Not to mention increased health care costs.)
Bottom line reflection: There are REAL ramifications when we don’t take Christ’s words/commands to love one another seriously (or even to care about someone at all). We’re experiencing such ramifications EVERY DAY now.
Robot “Jailbreaks” In the year or so since large language models hit the big time, researchers have demonstrated numerous ways of tricking them into producing problematic outputs including hateful jokes, malicious code, phishing emails, and the personal information of users. It turns out that misbehavior can take place in the physical world, too: LLM-powered robots can easily be hacked so that they behave in potentially dangerous ways.
Researchers from the University of Pennsylvania were able to persuade a simulated self-driving car to ignore stop signs and even drive off a bridge, get a wheeled robot to find the best place to detonate a bomb, and force a four-legged robot to spy on people and enter restricted areas.
“We view our attack not just as an attack on robots,” says George Pappas, head of a research lab at the University of Pennsylvania who helped unleash the rebellious robots. “Any time you connect LLMs and foundation models to the physical world, you actually can convert harmful text into harmful actions.”
…
The robot “jailbreaks” highlight a broader risk that is likely to grow as AI models become increasingly used as a way for humans to interact with physical systems, or to enable AI agents autonomously on computers, say the researchers involved.
In an effort to automate scientific discovery using artificial intelligence (AI), researchers have created a virtual laboratory that combines several ‘AI scientists’ — large language models with defined scientific roles — that can collaborate to achieve goals set by human researchers.
The system, described in a preprint posted on bioRxiv last month1, was able to design antibody fragments called nanobodies that can bind to the virus that causes COVID-19, proposing nearly 100 of these structures in a fraction of the time it would take an all-human research group.
By embracing an agent-first approach, every CIO can redefine their business operations. AI agents are now the number one choice for CIOs as they come pre-built and can generate responses that are consistent with a company’s brand using trusted business data, explains Thierry Nicault at Salesforce Middle.
The system generates full 3D environments that expand beyond what’s visible in the original image, allowing users to explore new perspectives.
Users can freely navigate and view the generated space with standard keyboard and mouse controls, similar to browsing a website.
It includes real-time camera effects like depth-of-field and dolly zoom, as well as interactive lighting and animation sliders to tweak scenes.
The system works with both photos and AI-generated images, enabling creators to integrate it with text-to-image tools or even famous works of art.
Why it matters:
This technology opens up exciting possibilities for industries like gaming, film, and virtual experiences. Soon, creating fully immersive worlds could be as simple as generating a static image.
Today we’re sharing our first step towards spatial intelligence: an AI system that generates 3D worlds from a single image. This lets you step into any image and explore it in 3D.
Most GenAI tools make 2D content like images or videos. Generating in 3D instead improves control and consistency. This will change how we make movies, games, simulators, and other digital manifestations of our physical world.
In this post you’ll explore our generated worlds, rendered live in your browser. You’ll also experience different camera effects, 3D effects, and dive into classic paintings. Finally, you’ll see how creators are already building with our models.
Addendum on 12/5/24:
ChatGPT turns two: how has it impacted markets? — from moneyweek.com Two years on from ChatGPT’s explosive launch into the public sphere, we assess the impact that it has had on stock markets and the world of technology
Risks on the Horizon: ASL Levels The two key risks Dario is concerned about are:
a) cyber, bio, radiological, nuclear (CBRN)
b) model autonomy
These risks are captured in Anthropic’s framework for understanding AI Safety Levels (ASL):
1. ASL-1: Narrow-task AI like Deep Blue (no autonomy, minimal risk).
2. ASL-2: Current systems like ChatGPT/Claude, which lack autonomy and don’t pose significant risks beyond information already accessible via search engines.
3. ASL-3: Agents arriving soon (potentially next year) that can meaningfully assist non-state actors in dangerous activities like cyber or CBRN (chemical, biological, radiological, nuclear) attacks. Security and filtering are critical at this stage to prevent misuse.
4. ASL-4: AI smart enough to evade detection, deceive testers, and assist state actors with dangerous projects. AI will be strong enough that you would want to use the model to do anything dangerous. Mechanistic interpretability becomes crucial for verifying AI behavior.
5. ASL-5: AGI surpassing human intelligence in all domains, posing unprecedented challenges.
Anthropic’s if/then framework ensures proactive responses: if a model demonstrates danger, the team clamps down hard, enforcing strict controls.
Should You Still Learn to Code in an A.I. World? — from nytimes.com by Coding boot camps once looked like the golden ticket to an economically secure future. But as that promise fades, what should you do? Keep learning, until further notice.
Compared with five years ago, the number of active job postings for software developers has dropped 56 percent, according to data compiled by CompTIA. For inexperienced developers, the plunge is an even worse 67 percent.
“I would say this is the worst environment for entry-level jobs in tech, period, that I’ve seen in 25 years,” said Venky Ganesan, a partner at the venture capital firm Menlo Ventures.
For years, the career advice from everyone who mattered — the Apple chief executive Tim Cook, your mother — was “learn to code.” It felt like an immutable equation: Coding skills + hard work = job.
There’s a new coding startup in town, and it just MIGHT have everybody else shaking in their boots (we’ll qualify that in a sec, don’t worry).
It’s called Lovable, the “world’s first AI fullstack engineer.”
… Lovable does all of that by itself. Tell it what you want to build in plain English, and it creates everything you need. Want users to be able to log in? One click. Need to store data? One click. Want to accept payments? You get the idea.
Early users are backing up these claims. One person even launched a startup that made Product Hunt’s top 10 using just Lovable.
As for us, we made a Wordle clone in 2 minutes with one prompt. Only edit needed? More words in the dictionary. It’s like, really easy y’all.
From DSC: I have to admit I’m a bit suspicious here, as the “conversation practice” product seems a bit too scripted at times, but I post it because the idea of using AI to practice soft skills development makes a great deal of sense:
This is mind-blowing!
NVIDIA has introduced Edify 3D, a 3D AI generator that lets us create high-quality 3D scenes using just a simple prompt. And all the assets are fully editable!
Nearly every Fortune 500 company now uses artificial intelligence (AI) to screen resumes and assess test scores to find the best talent. However, new research from the University of Florida suggests these AI tools might not be delivering the results hiring managers expect.
The problem stems from a simple miscommunication between humans and machines: AI thinks it’s picking someone to hire, but hiring managers only want a list of candidates to interview.
Without knowing about this next step, the AI might choose safe candidates. But if it knows there will be another round of screening, it might suggest different and potentially stronger candidates.
In the last two years, the world has seen a lot of breakneck advancement in the Generative AI space, right from text-to-text, text-to-image and text-to-video based Generative AI capabilities. And all of that’s been nothing short of stepping stones for the next big AI breakthrough – AI agents. According to Bloomberg, OpenAI is preparing to launch its first autonomous AI agent, which is codenamed ‘Operator,’ as soon as in January 2025.
Apparently, this OpenAI agent – or Operator, as it’s codenamed – is designed to perform complex tasks independently. By understanding user commands through voice or text, this AI agent will seemingly do tasks related to controlling different applications in the computer, send an email, book flights, and no doubt other cool things. Stuff that ChatGPT, Copilot, Google Gemini or any other LLM-based chatbot just can’t do on its own.
In the enterprise of the future, human workers are expected to work closely alongside sophisticated teams of AI agents.
According to McKinsey, generative AI and other technologies have the potential to automate 60 to 70% of employees’ work. And, already, an estimated one-third of American workers are using AI in the workplace — oftentimes unbeknownst to their employers.
However, experts predict that 2025 will be the year that these so-called “invisible” AI agents begin to come out of the shadows and take more of an active role in enterprise operations.
“Agents will likely fit into enterprise workflows much like specialized members of any given team,” said Naveen Rao, VP of AI at Databricks and founder and former CEO of MosaicAI.
A recent report from McKinsey predicts that generative AI could unlock up to $2.6 to $4.4 annually trillion in value within product development and innovation across various industries. This staggering figure highlights just how significantly generative AI is set to transform the landscape of product development. Generative AI app development is driving innovation by using the power of advanced algorithms to generate new ideas, optimize designs, and personalize products at scale. It is also becoming a cornerstone of competitive advantage in today’s fast-paced market. As businesses look to stay ahead, understanding and integrating technologies like generative AI app development into product development processes is becoming more crucial than ever.
AI agents handle complex, autonomous tasks beyond simple commands, showcasing advanced decision-making and adaptability.
The Based AI Agent template by Coinbase and Replit provides an easy starting point for developers to build blockchain-enabled AI agents.
AI based agents specifically integrate with blockchain, supporting crypto wallets and transactions.
Securing API keys in development is crucial to protect the agent from unauthorized access.
What are AI Agents and How Are They Used in Different Industries?— from rtinsights.com by Salvatore Salamone AI agents enable companies to make smarter, faster, and more informed decisions. From predictive maintenance to real-time process optimization, these agents are delivering tangible benefits across industries.
Being a College Athlete Now Means Constant Travel and Missed Classes— from nytimes.com by Billy Witz; via Ryan Craig Players are dealing with far-flung travel, jet lag and the pressures of trying to balance the roles of student, athlete and entrepreneur more than ever before.
Playing football this season for the U.C.L.A. Bruins means being a frequent (and distant) flier. The team began the campaign in August with a win at the University of Hawaii. Their next road games sent the Bruins to Louisiana State, then Penn State, and back across the country to Rutgers. Then, a trip to Nebraska on Saturday and a jaunt up to Washington.
Such is the life of the modern-day college athlete, with U.C.L.A. moving into the Big Ten Conference, the erstwhile standard-bearer for Midwest football that now stretches from Piscataway to Puget Sound.
In all, the Bruins will travel 22,226 miles this season — nearly enough to circumnavigate the globe. It is the equivalent of 33 round trips to the Bay Area to play Stanford or U.C. Berkeley, U.C.L.A.’s former rivals that have moved to a newly bicoastal league of their own.
Longer trips for games, extra missed classes and the effects of jet lag are heaping additional pressure on young adults trying to balance the roles of student, athlete and — in an age when they can cash in on their fame — entrepreneur.
…
The U.S.C. women’s volleyball team, which has four midweek road games, is likely to miss at least 12 days of classes.
If you’re a teen, you could be exposed to conspiracy theories and a host of other pieces of misinformation as frequently as every day while scrolling through your social media feeds.
That’s according to a new study by the News Literacy Project, which also found that teens struggle with identifying false information online. This comes at a time when media literacy education isn’t available to most students, the report finds, and their ability to distinguish between objective and biased information sources is weak. The findings are based on responses from more than 1,000 teens ages 13 to 18.
“News literacy is fundamental to preparing students to become active, critically thinking members of our civic life — which should be one of the primary goals of a public education,” Kim Bowman, News Literacy Project senior research manager and author of the report, said in an email interview. “If we don’t teach young people the skills they need to evaluate information, they will be left at a civic and personal disadvantage their entire lives. News literacy instruction is as important as core subjects like reading and math.”
To help teach your students about news and media literacy, I highly recommend my sister Sue Ellen Christian’s work out atWonder Media.
.
.
.
There you will find numerous resources for libraries, schools, families and individuals. Suggestions of books, articles, other websites, and online materials to assist you in growing your media literacy and news media literacy are also included there.
Google’s worst nightmare just became reality. OpenAI didn’t just add search to ChatGPT – they’ve launched an all-out assault on traditional search engines.
It’s the beginning of the end for search as we know it.
Let’s be clear about what’s happening: OpenAI is fundamentally changing how we’ll interact with information online. While Google has spent 25 years optimizing for ad revenue and delivering pages of blue links, OpenAI is building what users actually need – instant, synthesized answers from current sources.
The rollout is calculated and aggressive: ChatGPT Plus and Team subscribers get immediate access, followed by Enterprise and Education users in weeks, and free users in the coming months. This staged approach is about systematically dismantling Google’s search dominance.
Open for AI: India Tech Leaders Build AI Factories for Economic Transformation — from blogs.nvidia.com Yotta Data Services, Tata Communications, E2E Networks and Netweb are among the providers building and offering NVIDIA-accelerated infrastructure and software, with deployments expected to double by year’s end.
35 For I was hungry and you gave me something to eat, I was thirsty and you gave me something to drink, I was a stranger and you invited me in, 36 I needed clothes and you clothed me, I was sick and you looked after me, I was in prison and you came to visit me.’
37 “Then the righteous will answer him, ‘Lord, when did we see you hungry and feed you, or thirsty and give you something to drink? 38 When did we see you a stranger and invite you in, or needing clothes and clothe you? 39 When did we see you sick or in prison and go to visit you?’
40 “The King will reply, ‘Truly I tell you, whatever you did for one of the least of these brothers and sisters of mine, you did for me.’
12 For the word of God is alive and active. Sharper than any double-edged sword, it penetrates even to dividing soul and spirit, joints and marrow; it judges the thoughts and attitudes of the heart.
34 Then Peter began to speak: “I now realize how true it is that God does not show favoritism35 but accepts from every nation the one who fears him and does what is right.
SALT LAKE CITY, Oct. 22, 2024 /PRNewswire/ — Instructure, the leading learning ecosystem and UPCEA, the online and professional education association, announced the results of a survey on whether institutions are leveraging AI to improve learner outcomes and manage records, along with the specific ways these tools are being utilized. Overall, the study revealed interest in the potential of these technologies is far outpacing adoption. Most respondents are heavily involved in developing learner experiences and tracking outcomes, though nearly half report their institutions have yet to adopt AI-driven tools for these purposes. The research also found that only three percent of institutions have implemented Comprehensive Learner Records (CLRs), which provide a complete overview of an individual’s lifelong learning experiences.
In the nearly two years since generative artificial intelligence burst into public consciousness, U.S. schools of education have not kept pace with the rapid changes in the field, a new report suggests.
Only a handful of teacher training programs are moving quickly enough to equip new K-12 teachers with a grasp of AI fundamentals — and fewer still are helping future teachers grapple with larger issues of ethics and what students need to know to thrive in an economy dominated by the technology.
The report, from the Center on Reinventing Public Education, a think tank at Arizona State University, tapped leaders at more than 500 U.S. education schools, asking how their faculty and preservice teachers are learning about AI. Through surveys and interviews, researchers found that just one in four institutions now incorporates training on innovative teaching methods that use AI. Most lack policies on using AI tools, suggesting that they probably won’t be ready to teach future educators about the intricacies of the field anytime soon.
It is bonkers that I can write out all my life goals on a sheet of paper, take a photo of it, and just ask Claude or ChatGPT for help.
I get a complete plan, milestones, KPIs, motivation, and even action support to get there.
As beta testers, we’re shaping the tools of tomorrow. As researchers, we’re pioneering new pedagogical approaches. As ethical guardians, we’re ensuring that AI enhances rather than compromises the educational experience. As curators, we’re guiding students through the wealth of information AI provides. And as learners ourselves, we’re staying at the forefront of educational innovation.
In a groundbreaking study, researchers from Penn Engineering showed how AI-powered robots can be manipulated to ignore safety protocols, allowing them to perform harmful actions despite normally rejecting dangerous task requests.
What did they find ?
Researchers found previously unknown security vulnerabilities in AI-governed robots and are working to address these issues to ensure the safe use of large language models(LLMs) in robotics.
Their newly developed algorithm, RoboPAIR, reportedly achieved a 100% jailbreak rate by bypassing the safety protocols on three different AI robotic systems in a few days.
Using RoboPAIR, researchers were able to manipulate test robots into performing harmful actions, like bomb detonation and blocking emergency exits, simply by changing how they phrased their commands.
Why does it matter?
This research highlights the importance of spotting weaknesses in AI systems to improve their safety, allowing us to test and train them to prevent potential harm.
From DSC: Great! Just what we wanted to hear. But does it surprise anyone? Even so…we move forward at warp speeds.
From DSC:
So, given the above item, does the next item make you a bit nervous as well? I saw someone on Twitter/X exclaim, “What could go wrong?” I can’t say I didn’t feel the same way.
We’re also introducing a groundbreaking new capability in public beta: computer use.Available today on the API, developers can direct Claude to use computers the way people do—by looking at a screen, moving a cursor, clicking buttons, and typing text. Claude 3.5 Sonnet is the first frontier AI model to offer computer use in public beta. At this stage, it is still experimental—at times cumbersome and error-prone. We’re releasing computer use early for feedback from developers, and expect the capability to improve rapidly over time.
Per The Rundown AI:
The Rundown: Anthropic just introduced a new capability called ‘computer use’, alongside upgraded versions of its AI models, which enables Claude to interact with computers by viewing screens, typing, moving cursors, and executing commands.
… Why it matters: While many hoped for Opus 3.5, Anthropic’s Sonnet and Haiku upgrades pack a serious punch. Plus, with the new computer use embedded right into its foundation models, Anthropic just sent a warning shot to tons of automation startups—even if the capabilities aren’t earth-shattering… yet.
Also related/see:
What is Anthropic’s AI Computer Use? — from ai-supremacy.com by Michael Spencer Task automation, AI at the intersection of coding and AI agents take on new frenzied importance heading into 2025 for the commercialization of Generative AI.
New Claude, Who Dis? — from theneurondaily.com Anthropic just dropped two new Claude models…oh, and Claude can now use your computer.
What makes Act-One special? It can capture the soul of an actor’s performance using nothing but a simple video recording. No fancy motion capture equipment, no complex face rigging, no army of animators required. Just point a camera at someone acting, and watch as their exact expressions, micro-movements, and emotional nuances get transferred to an AI-generated character.
Think about what this means for creators: you could shoot an entire movie with multiple characters using just one actor and a basic camera setup. The same performance can drive characters with completely different proportions and looks, while maintaining the authentic emotional delivery of the original performance. We’re witnessing the democratization of animation tools that used to require millions in budget and years of specialized training.
Also related/see:
Introducing, Act-One. A new way to generate expressive character performances inside Gen-3 Alpha using a single driving video and character image. No motion capture or rigging required.
Google has signed a “world first” deal to buy energy from a fleet of mini nuclear reactors to generate the power needed for the rise in use of artificial intelligence.
The US tech corporation has ordered six or seven small nuclear reactors (SMRs) from California’s Kairos Power, with the first due to be completed by 2030 and the remainder by 2035.
After the extreme peak and summer slump of 2023, ChatGPT has been setting new traffic highs since May
ChatGPT has been topping its web traffic records for months now, with September 2024 traffic up 112% year-over-year (YoY) to 3.1 billion visits, according to Similarweb estimates. That’s a change from last year, when traffic to the site went through a boom-and-bust cycle.
Google has made a historic agreement to buy energy from a group of small nuclear reactors (SMRs) from Kairos Power in California. This is the first nuclear power deal specifically for AI data centers in the world.
Hey creators!
Made on YouTube 2024 is here and we’ve announced a lot of updates that aim to give everyone the opportunity to build engaging communities, drive sustainable businesses, and express creativity on our platform.
Below is a roundup with key info – feel free to upvote the announcements that you’re most excited about and subscribe to this post to get updates on these features! We’re looking forward to another year of innovating with our global community it’s a future full of opportunities, and it’s all Made on YouTube!
Today, we’re announcing new agentic capabilities that will accelerate these gains and bring AI-first business process to every organization.
First, the ability to create autonomous agents with Copilot Studio will be in public preview next month.
Second, we’re introducing ten new autonomous agents in Dynamics 365 to build capacity for every sales, service, finance and supply chain team.
10 Daily AI Use Cases for Business Leaders— from flexos.work by Daan van Rossum While AI is becoming more powerful by the day, business leaders still wonder why and where to apply today. I take you through 10 critical use cases where AI should take over your work or partner with you.
Emerging Multi-Modal AI Video Creation Platforms The rise of multi-modal AI platforms has revolutionized content creation, allowing users to research, write, and generate images in one app. Now, a new wave of platforms is extending these capabilities to video creation and editing.
Multi-modal video platforms combine various AI tools for tasks like writing, transcription, text-to-voice conversion, image-to-video generation, and lip-syncing. These platforms leverage open-source models like FLUX and LivePortrait, along with APIs from services such as ElevenLabs, Luma AI, and Gen-3.
Financial Barriers Remain Significant. 58% of respondents note their current financial situation would not allow them to afford college tuition and related expenses. 72% cite affordable tuition or cost of the program as a necessary factor for re-enrollment.
Shifting Perceptions of Degree Value. While 84% of respondents believed they needed a degree to achieve their professional goals before first enrolling, only 34% still hold that belief.
Trust Deficit in Higher Education. Only 42% of respondents agree that colleges and universities are trustworthy, underscoring a trust deficit that institutions must address.
Key Motivators for Re-enrollment. Salary improvement (53%), personal goals (44%), and career change (38%) are the top motivators for potential re-enrollment.
Predicting Readiness to Re-enroll. The top three factors predicting adult learners’ readiness to re-enroll are mental resilience and routine readiness, positive opinions on institutional trustworthiness and communication, and belief in the value of a degree.
Communication Preferences. 86% of respondents prefer email communication when inquiring about programs, with minimal interest in chatbots (6%).
Parents of Gen Alpha and Gen Z students are optimistic about the potential of artificial intelligence (AI) to enhance various aspects of education, according to a new Morning Consult survey commissioned by Samsung Solve for Tomorrow.
The survey notes that an overwhelming 88 percent of parents believe that knowledge of AI will be crucial in their child’s future education and career. However, despite this belief, 81 percent of parents either don’t believe or are not sure that AI is even part of their children’s curriculum. That disparity highlights a pressing need to raise awareness of and increase parental involvement in AI discussions, and advance the implementation of AI in American primary and secondary education.
Norway law decrees: Let childhood be childhood — from hechingerreport.org by Jackie Mader In the Scandinavian country, early childhood education is a national priority, enshrined in law
Ullmann’s conclusion embodies one of Norway’s goals for its citizens: to build a nation of thriving adults by providing childhoods that are joyful, secure and inclusive. Perhaps nowhere is this belief manifested more clearly than in the nation’s approach to early child care. (In Norway, all education for children 5 and under is referred to as “barnehagen,” the local translation of “kindergarten.”) To an American, the Norwegian philosophy, both in policy and in practice, could feel alien. The government’s view isn’t that child care is a place to put children so parents can work, or even to prepare children for the rigors of elementary school. It’s about protecting childhood.
“A really important pillar of Norway’s early ed philosophy is the value of childhood in itself,” said Henrik D. Zachrisson, a professor at the Centre for Research on Equality in Education at the University of Oslo. “Early ed is supposed to be a place where children can be children and have the best childhood possible.”
2 Do not conform to the pattern of this world, but be transformed by the renewing of your mind. Then you will be able to test and approve what God’s will is—his good, pleasing and perfect will.
From DSC: A message for new Christians — be patient and go into your journey with your eyes wide open. This transformation process — of changing how we think and behave — takes years (at least it has for me). But keep praying, reading, and being in fellowship with other believers — don’t stop meeting together. Powerful and lasting change does take place. The “lenses” that you will view the world through change. Just remember that some types of change seem to take a lot longer.
That said, may you be the light in an often dark world.
11 I, even I, am the Lord, and apart from me there is no savior. 12 I have revealed and saved and proclaimed— I, and not some foreign god among you. You are my witnesses,” declares the Lord, “that I am God.