Brought to you by Nigel Douglas, Head of Developer Relations at Cloudsmith.
The new year has arrived with a surge of critical activity across the software supply chain and infrastructure landscape. High-severity vulnerabilities are dominating the headlines, with over 87,000 MongoDB instances exposed to the "MongoBleed" memory leak and 103,000 n8n automation hubs facing RCE risks. We’ve also seen the first-ever CVE in the Linux kernel’s Rust code and a perfect 10/10 CVSS score for the React2Shell flaw, emphasising the relentless pressure on modern frameworks. On the defensive front, the industry is pivoting toward transparency and standardisation: Docker open-sourced its Hardened Images, the CNCF launched a Kubernetes AI Conformance program, and Anthropic transitioned the Model Context Protocol (MCP) to the Linux Foundation to foster a vendor-neutral future for agentic AI.
MongoBleed: 87K MongoDB Instances Exposed to Pre-Auth Memory Leak
MongoDB servers are under active exploitation via CVE-2025-14847, a pre-auth memory leak codenamed MongoBleed. With over 87,000 potentially susceptible instances identified across the world, the vulnerability (CVSS score: 8.7) allows an unauthenticated attacker to remotely leak sensitive data from MongoDB server memory, including user information, passwords, and API keys.
Linux Kernel Rust Code Sees Its First CVE Vulnerability (CVE-2025-68260) Source: NVD
A critical race condition was identified in the rust_binder component of the Linux kernel, specifically within the Node::release function. The vulnerability stems from an unsafe list removal operation in which one thread attempts to remove a NodeDeath item while another thread has moved the list to a local stack, leading to a data race on the prev/next pointers and subsequent memory corruption. This flaw, introduced in version 6.18, can trigger kernel paging request failures and system crashes; it has been resolved in versions 6.18.1 and 6.19-rc1 by modifying the release logic to pop items directly from the original list under proper locking.
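The fix pattern described above — popping items from the original shared list while holding the lock, rather than detaching the list and racing on it — can be sketched outside the kernel. The following is a minimal Python analogue (not the actual rust_binder code; `NodeReleaser` and its fields are invented for illustration):

```python
import threading
from collections import deque

class NodeReleaser:
    """Illustrative analogue of the patched release logic: items are
    popped one at a time from the original shared list while the lock
    is held, so no thread ever observes a half-detached node."""

    def __init__(self, items):
        self.lock = threading.Lock()
        self.deaths = deque(items)  # shared list of pending notifications

    def release(self, handled):
        while True:
            with self.lock:
                if not self.deaths:
                    return
                item = self.deaths.popleft()  # pop under the lock
            handled.append(item)              # process outside the lock
```

Several threads can call `release` concurrently and every item is still handled exactly once — the property the racy "move the list aside, then walk it" approach failed to guarantee.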
React2Shell: Perfect 10.0 CVSS RCE in React Server Components
This critical pre-auth remote code execution (RCE) vulnerability carries a maximum CVSS score of 10.0. It affects the React Server Components ecosystem, including Next.js, due to a failure to validate incoming payloads sent via the Flight protocol. By injecting malicious structures that trigger prototype pollution, attackers can execute arbitrary code on both Windows and Linux servers with a single HTTP request. While initially discovered during red team assessments in late 2025, real-world exploitation has since been observed, primarily involving the deployment of cryptocurrency miners.
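Prototype pollution is a JavaScript-specific trick, but the underlying class of flaw — merging untrusted payload keys into internal objects without validation — translates to any language. A hedged Python sketch of the missing validation step (the `DENYLIST` keys are the classic JavaScript pollution vectors; the function is illustrative, not the Next.js patch):

```python
# Keys that, in a JS runtime, would let a merged payload rewrite
# shared object state. Rejecting them up front is the validation
# step this class of bug omits.
DENYLIST = {"__proto__", "constructor", "prototype"}

def safe_merge(base: dict, untrusted: dict) -> dict:
    """Recursively merge an untrusted payload into a config dict,
    refusing keys that could pollute shared state."""
    for key, value in untrusted.items():
        if key in DENYLIST:
            raise ValueError(f"rejected dangerous key: {key!r}")
        if isinstance(value, dict) and isinstance(base.get(key), dict):
            safe_merge(base[key], value)
        else:
            base[key] = value
    return base
```

A denylist is the narrowest possible fix; real payload handling should prefer an allowlist of expected keys.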
LangGrinch: Serialisation Injection in LangChain Core
Dubbed LangGrinch for its proximity to the festive season, this critical vulnerability was identified in the LangChain Core Python package, receiving a near-maximum severity score of 9.3. The flaw stems from a serialisation injection issue in the dumps() and dumpd() functions, where the system fails to properly escape specific dictionary keys. If exploited via prompt injection, it could allow attackers to steal sensitive secrets or manipulate LLM responses, posing a significant risk to applications built on the LangChain framework.
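The general fix for this class of bug is to escape reserved marker keys on serialisation, so attacker-controlled dictionaries can never masquerade as framework objects when loaded back. A minimal sketch, assuming a single reserved marker key named "lc" (this is an assumption for illustration, not the actual LangChain patch):

```python
import json

RESERVED = "lc"  # assumed marker key that tags serialised framework objects

def dumps_escaped(obj) -> str:
    """Serialise nested data, prefix-escaping the reserved marker key in
    plain dicts so untrusted content can't be mistaken for a framework
    object on load. A sketch of the fix class, not LangChain's code."""
    def escape(value):
        if isinstance(value, dict):
            out = {}
            for k, v in value.items():
                if k == RESERVED:
                    k = "\\" + k  # escape the reserved key
                out[k] = escape(v)
            return out
        if isinstance(value, list):
            return [escape(v) for v in value]
        return value
    return json.dumps(escape(obj))
```

The corresponding loader would unescape the prefix and only treat *unescaped* marker keys as trusted object tags.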
103K n8n Automation Instances at Risk From RCE Flaw
A critical RCE vulnerability (CVSS score: 9.9) currently threatens over 103,000 n8n automation instances worldwide. The flaw exists in the platform's expression evaluation system, allowing authenticated attackers to bypass sandboxing and execute arbitrary code with full system privileges. Because n8n often serves as a central hub for sensitive business workflows and credentials, a compromise can lead to a massive "blast radius" across connected cloud services and internal databases.
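Sandbox escapes like this are why the safer design is an allowlist evaluator that only accepts known-good syntax, rather than handing expressions to a general-purpose runtime and trying to fence it in afterwards. A toy Python illustration of the allowlist approach (unrelated to n8n's actual JavaScript sandbox):

```python
import ast
import operator

# Only these arithmetic operators are permitted; anything else in the
# parse tree is rejected outright.
_OPS = {
    ast.Add: operator.add, ast.Sub: operator.sub,
    ast.Mult: operator.mul, ast.Div: operator.truediv,
}

def eval_expr(expr: str, variables: dict):
    """Evaluate a simple arithmetic expression over named variables,
    refusing calls, attribute access, imports, and all other syntax."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.Name) and node.id in variables:
            return variables[node.id]
        raise ValueError("disallowed syntax in expression")
    return walk(ast.parse(expr, mode="eval"))
```

Because the evaluator never executes anything outside its small grammar, there is no sandbox boundary to escape.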
IN THE NEWS
CI/CD Security
Researchers Spot Modified Shai-Hulud Worm Testing Payload on npm Registry
A new, modified strain of the Shai-Hulud malware has been discovered on the npm registry within the package @vietmoney/react-big-calendar, signalling a potential third wave of supply chain attacks. Unlike previous iterations, this version features renamed payloads, updated exfiltration descriptions, and the removal of a "dead man's switch" wiper, suggesting a more refined approach by attackers who likely possess the original source code. The malware remains dangerous due to its worm-like ability to hijack developer tokens and self-replicate across other high-traffic packages. Concurrently, a separate threat was identified on Maven Central posing as a legitimate Jackson JSON library to deliver Cobalt Strike beacons, further highlighting a sustained period of high-risk activity targeting package managers.
Catching Malicious Package Releases Using a Transparency Log Source: OpenSSF Blog
Supported by OpenSSF funding, Sigstore is preparing a rekor-monitor for production to help developers protect their software supply chains. By utilising transparency logs like Rekor, which offer append-only and tamper-evident records, maintainers can proactively monitor for unauthorised uses of their identities or compromised release processes. The updated tool includes support for Rekor v2, certificate validation, and a simplified GitHub reusable workflow, ensuring that package maintainers (such as those on PyPI and npm) receive immediate alerts if malicious entries are logged, allowing for a much faster response to security breaches.
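The property that makes a transparency log useful here is that every entry commits to everything before it, so history cannot be silently rewritten. Rekor implements this with a Merkle tree and signed checkpoints; the core idea can be sketched with a simple hash chain (a toy, not Rekor's data structure):

```python
import hashlib

class TransparencyLog:
    """Toy append-only log: each entry's hash chains over the previous
    entry's hash, so any tampering with history breaks verification."""

    def __init__(self):
        self.entries = []  # list of (payload, chained_hash)

    def append(self, payload: bytes) -> str:
        prev = self.entries[-1][1] if self.entries else ""
        h = hashlib.sha256(prev.encode() + payload).hexdigest()
        self.entries.append((payload, h))
        return h

    def verify(self) -> bool:
        prev = ""
        for payload, h in self.entries:
            if hashlib.sha256(prev.encode() + payload).hexdigest() != h:
                return False
            prev = h
        return True
```

A monitor like rekor-monitor periodically re-verifies this kind of structure and alerts the maintainer when an entry appears that they did not produce.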
Docker is making DHI freely available and open source
Docker Hardened Images (DHI) are now free and open source under the Apache 2.0 license, providing developers with a secure, minimal, and transparent foundation. Built on familiar Alpine and Debian bases to ensure a seamless developer experience, DHI offers verified SBOMs, SLSA Level 3 provenance, and significantly reduced attack surfaces. By expanding this hardened ecosystem to include Helm charts and MCP servers, developers can secure the entire software stack, from the app layer down to the system packages, ensuring transparency and trust.
PyPI's 2025 Year in Review
In 2025, PyPI demonstrated remarkable growth and resilience, serving over 2.5 trillion requests and facilitating the transfer of 1.92 exabytes of data. Throughout the year, the platform prioritised ecosystem security by mandating phishing-resistant 2FA for over half of its active users and expanding "Trusted Publishing" to include GitLab and custom OIDC issuers, which now accounts for 20% of all uploads. Beyond infrastructure, PyPI significantly improved its operational responsiveness, clearing a massive PEP 541 project-name backlog and resolving 92% of malware reports within 24 hours. By combining proactive threat detection, such as typosquatting and domain resurrection prevention, with a commitment to transparent incident reporting, PyPI concludes 2025 as a more secure, efficient, and community-focused cornerstone of the Python landscape.
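Typosquat detection of the kind mentioned above often boils down to comparing new package names against popular ones by edit distance. A simplified sketch (the `POPULAR` set and threshold are placeholders; real registries use richer heuristics):

```python
def levenshtein(a: str, b: str) -> int:
    # Classic dynamic-programming edit distance.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,            # deletion
                           cur[j - 1] + 1,         # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

POPULAR = {"requests", "numpy", "pandas", "urllib3"}  # placeholder list

def looks_like_typosquat(name: str, popular=POPULAR, max_distance=1) -> bool:
    """Flag names within a small edit distance of a popular package."""
    name = name.lower()
    return any(name != p and levenshtein(name, p) <= max_distance
               for p in popular)
```

A distance threshold of 1 catches single-character slips like a dropped letter while leaving legitimately distinct names alone.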
Kubernetes
Chainguard are keeping Ingress Nginx alive Source: Chainguard
In response to the official retirement and planned archiving of ingress-nginx in March 2026, Chainguard stepped in to provide long-term maintenance through its EmeritOSS program. Because the original project suffered from a lack of maintainers and would otherwise become a security risk due to unpatched vulnerabilities, Chainguard is offering a maintained fork on GitHub to ensure the community has a safe migration path. This initiative is not intended to continue active feature development, but rather to provide stability and CVE patches, buying organisations the necessary time to transition to modern alternatives like the Gateway API or other production-ready controllers without compromising the security of their Kubernetes clusters.
Kubernetes gets an AI Conformance Program Source: InfoQ
The CNCF launched the Certified Kubernetes AI Conformance program to standardise how machine learning workloads run on Kubernetes. By establishing a technical baseline for critical features like GPU resource allocation and gang scheduling, the initiative aims to eliminate infrastructure fragmentation and prevent vendor lock-in. Backed by major players like Microsoft and Google, this certification ensures that AI applications remain portable and predictable across different cloud and on-prem environments, positioning Kubernetes as a unified, interoperable alternative to proprietary AI platforms.
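Gang scheduling, one of the features the conformance baseline covers, means a distributed job's workers are placed all-or-nothing: a partial allocation would leave workers idling and can deadlock training. A minimal sketch of the placement rule (illustrative only, not a Kubernetes scheduler):

```python
def gang_schedule(job_demands, free_gpus: int):
    """All-or-nothing placement: a job starts only if its full GPU
    demand can be granted at once; otherwise it is skipped entirely.

    job_demands: list of (job_name, gpus_required) pairs.
    Returns (scheduled job names, remaining free GPUs).
    """
    scheduled, remaining = [], free_gpus
    for job, demand in job_demands:
        if demand <= remaining:
            remaining -= demand
            scheduled.append(job)
        # else: grant nothing to this job rather than a partial slice
    return scheduled, remaining
```

The key contrast with ordinary pod-by-pod scheduling is that a job needing six GPUs on a cluster with four free gets zero, not four.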
Kubernetes 1.35: Timbernetes
Drew Hagen, the release lead for Kubernetes 1.35, discusses the theme of the release, Timbernetes, which symbolises resilience and diversity in the Kubernetes community. He shares insights from his experience as a release lead, highlights key features and enhancements in the new version, and addresses the importance of coordination in release management. Drew also touches on the deprecations in the release and the future of Kubernetes, including its applications in edge computing.
Helm 4: What’s new in the open source Kubernetes package manager? Source: TheNewStack
In 2025, the Kubernetes package manager Helm celebrated its 10th anniversary with the release of Helm 4, its first major update in six years. Originally conceived as a hackathon project called "Kate’s Place," the tool evolved from a simple prototype into a foundational CNCF-graduated project. The new version addresses years of design debt by introducing modern logging and dependency management, along with WebAssembly-based plugins to ensure cross-platform portability. This milestone reflects Helm’s shift toward maturity, prioritising "boring" but essential features that improve reliability and efficiency for DevOps professionals.
AI, LLMs & MCP
Anthropic donates MCP to the Linux Foundation Source: Anthropic
Model Context Protocol (MCP) has officially been donated to the newly formed Agentic AI Foundation, a directed fund under the Linux Foundation co-founded by Anthropic, Block, and OpenAI. Since its introduction a year ago as an open standard for connecting AI to external systems, MCP has seen massive adoption, boasting over 10,000 public servers and integration into major platforms like Gemini, ChatGPT, and Microsoft Copilot. By moving MCP to a vendor-neutral home alongside other founding projects like Block’s goose and OpenAI’s AGENTS.md, the initiative ensures that the protocol remains a community-driven, open-source standard. This transition, supported by industry giants including Google, AWS, and Microsoft, aims to foster transparent innovation and cross-platform compatibility as the ecosystem for agentic AI continues to scale.
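Under the hood, MCP messages are JSON-RPC 2.0. A sketch of building a tool-invocation request in the shape the public spec describes (treat the exact field layout here as an assumption of this sketch rather than a normative example):

```python
import itertools
import json

_ids = itertools.count(1)  # JSON-RPC request ids must be unique per session

def mcp_tool_call(tool: str, arguments: dict) -> str:
    """Build a JSON-RPC 2.0 request for invoking a named tool on an
    MCP server, serialised as a JSON string."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": next(_ids),
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })
```

Because the wire format is plain JSON-RPC, any client in any language can drive any conforming server — the interoperability the Linux Foundation move is meant to protect.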
Secure ML workflows with Cloudsmith and SageMaker Source: Cloudsmith
Cloudsmith’s new reference implementation guide for Amazon’s SageMaker AI offers a blueprint for securing AI/ML supply chains by centralising Hugging Face models, Docker images, and Python packages into a private, governed environment. By moving away from unmanaged public sources, teams can eliminate both security risks and build instability without disrupting their existing workflows. The integration provides a "single source of truth" that allows SageMaker to securely authenticate, proxy, and cache all necessary artifacts for consistent and protected machine learning development.
AprielGuard: A Guardrail for Safety and Adversarial Robustness in Modern LLM Systems Source: HuggingFace
As LLMs evolve into complex agentic systems, they face a sophisticated threat landscape involving multi-turn jailbreaks, memory poisoning, and tool manipulation. To address the limitations of traditional, single-turn safety filters, AprielGuard is introduced as a unified 8B parameter safeguard model designed for modern agentic workflows. It identifies 16 categories of safety risks and a broad spectrum of adversarial attacks across standalone prompts, multi-turn dialogues, and complex execution traces. By offering both a high-speed production mode and an explainable reasoning mode, AprielGuard provides a scalable, robust defense against the exploitation of long-context reasoning and tool-assisted interactions.
NVIDIA Acquires SchedMD, the Company Behind Slurm
NVIDIA has acquired SchedMD, the primary developer behind the open-source workload manager Slurm, to enhance resource management for large-scale HPC and AI clusters. Despite the acquisition, NVIDIA intends to keep Slurm open-source and vendor-neutral, ensuring it remains accessible across diverse hardware environments. As a critical tool for more than half of the world's top supercomputers, Slurm will continue to play a vital role in optimising the complex scheduling and scaling required for generative AI and foundation model training.
KubeCon + CloudNativeCon Europe, Amsterdam
The Cloud Native Computing Foundation’s flagship conference brings together adopters and technologists from leading open source and cloud native communities in Amsterdam. Be a part of the conversation as CNCF Graduated, Incubating, and Sandbox Projects unite for four days of collaboration, learning, and innovation to drive the future of cloud native computing. We’ll also be running another exciting Capture The Flag (CTF) event.
Signed, sealed, and delivered - see you next issue.