
  • TeamPCP Backdoors LiteLLM Versions 1.82.7–1.82.8 Likely via Trivy CI/CD Compromise

    TeamPCP, the threat actor behind the recent compromises of Trivy and KICS, has now struck a popular Python package named litellm, pushing two malicious versions that contain a credential harvester, a Kubernetes lateral movement toolkit, and a persistent backdoor.

    Multiple security vendors, including Endor Labs and JFrog, revealed that litellm versions 1.82.7 and 1.82.8 were published on March 24, 2026, likely stemming from the package’s use of Trivy in its CI/CD workflow. Both backdoored versions have since been removed from PyPI.

    “The payload is a three-stage attack: a credential harvester sweeping SSH keys, cloud credentials, Kubernetes secrets, cryptocurrency wallets, and .env files; a Kubernetes lateral movement toolkit deploying privileged pods to every node; and a persistent systemd backdoor (sysmon.service) polling ‘checkmarx[.]zone/raw’ for additional binaries,” Endor Labs researcher Kiran Raj said.

    As observed in previous cases, the harvested data is exfiltrated as an encrypted archive (“tpcp.tar.gz”) to a command-and-control domain named “models.litellm[.]cloud” via an HTTPS POST request.

    In the case of 1.82.7, the malicious code is embedded in the “litellm/proxy/proxy_server.py” file, with the injection performed during or after the wheel build process. The code is engineered to be executed at module import time, such that any process that imports “litellm.proxy.proxy_server” triggers the payload without requiring any user interaction.

    The next iteration of the package adds a “more aggressive vector” by incorporating a malicious “litellm_init.pth” at the wheel root, causing the logic to be executed automatically on every Python process startup in the environment, not just when litellm is imported.

    Another aspect that makes 1.82.8 more dangerous is that the .pth launcher spawns a child Python process via subprocess.Popen, allowing the payload to run in the background.

    “Python .pth files placed in site-packages are processed automatically by site.py at interpreter startup,” Endor Labs said. “The file contains a single line that imports a subprocess and launches a detached Python process to decode and execute the same Base64 payload.”
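The mechanism is easy to demonstrate. The following benign sketch (our illustration, not the attackers' code) writes a one-line `.pth` file into a temporary directory and triggers the same `site.py` code path explicitly with `site.addsitedir()`; in a real site-packages directory, the line would run automatically at every interpreter startup:

```python
# Benign demonstration of the .pth mechanism the attackers abused.
# Lines in a .pth file that begin with "import" are exec()'d by site.py
# when the directory is processed -- normally at interpreter startup for
# every site-packages directory. Here we trigger that code path on demand
# with site.addsitedir() against a temporary directory.
import os
import site
import tempfile

tmp = tempfile.mkdtemp()
marker = os.path.join(tmp, "pth_ran.txt")

# A single "import" line, structurally like the malicious litellm_init.pth --
# except this one only writes a harmless marker file instead of a payload.
with open(os.path.join(tmp, "demo_init.pth"), "w") as f:
    f.write(f"import os; open({marker!r}, 'w').write('executed at startup')\n")

site.addsitedir(tmp)  # site.py exec()s the import line immediately

print(open(marker).read())  # -> executed at startup
```

The malicious version of this one-liner additionally detached a child process via subprocess.Popen so the payload outlived the importing process.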

    The payload decodes to an orchestrator that unpacks a credential harvester and a persistence dropper. The harvester also leverages the Kubernetes service account token (if present) to enumerate all nodes in the cluster and deploy a privileged pod to each one of them. The pod then chroots into the host file system and installs the persistence dropper as a systemd user service on every node.
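Defenders hunting for this lateral movement stage can start by flagging privileged pods. Below is a minimal, hypothetical triage helper (the function name and sample data are ours) that parses the JSON produced by `kubectl get pods -A -o json`:

```python
import json

def find_privileged_pods(pods_json: str):
    """Return (namespace, pod, container) triples whose securityContext
    requests privileged mode -- the pattern used by the lateral movement
    toolkit described above."""
    flagged = []
    for pod in json.loads(pods_json).get("items", []):
        meta = pod.get("metadata", {})
        for container in pod.get("spec", {}).get("containers", []):
            sec = container.get("securityContext") or {}
            if sec.get("privileged"):
                flagged.append((meta.get("namespace"), meta.get("name"),
                                container.get("name")))
    return flagged

# Crafted sample; in practice, feed in: kubectl get pods -A -o json
sample = json.dumps({"items": [
    {"metadata": {"namespace": "default", "name": "sus-pod"},
     "spec": {"containers": [{"name": "c0",
                              "securityContext": {"privileged": True}}]}},
    {"metadata": {"namespace": "kube-system", "name": "ok-pod"},
     "spec": {"containers": [{"name": "c1"}]}},
]})
print(find_privileged_pods(sample))  # [('default', 'sus-pod', 'c0')]
```

Note that legitimate workloads (CNI plugins, node agents) also run privileged, so every hit needs manual review against expected infrastructure.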

    The systemd service is configured to launch a Python script (“~/.config/sysmon/sysmon.py”) – the same name used in the Trivy compromise – that reaches out to “checkmarx[.]zone/raw” every 50 minutes to fetch a URL pointing to the next-stage payload. If the URL contains youtube[.]com, the script aborts execution – a kill switch pattern common to all the incidents observed so far.

    “This campaign is almost certainly not over,” Endor Labs said. “TeamPCP has demonstrated a consistent pattern: each compromised environment yields credentials that unlock the next target. The pivot from CI/CD (GitHub Actions runners) to production (PyPI packages running in Kubernetes clusters) is a deliberate escalation.”

    With the latest development, TeamPCP has waged a relentless supply chain attack campaign that now spans five ecosystems (GitHub Actions, Docker Hub, npm, Open VSX, and PyPI), expanding its targeting footprint and bringing more and more systems under its control.

    “TeamPCP is escalating a coordinated campaign targeting security tools and open source developer infrastructure, and is now openly taking credit for multiple follow-on attacks across ecosystems,” Socket said. “This is a sustained operation targeting high-leverage points in the software supply chain.”

    In a message posted on their Telegram channel, TeamPCP said: “These companies were built to protect your supply chains yet they can’t even protect their own, the state of modern security research is a joke, as a result we’re gonna be around for a long time stealing terrabytes [sic] of trade secrets with our new partners.”

    “The snowball effect from this will be massive, we are already partnering with other teams to perpetuate the chaos, many of your favourite security tools and open-source projects will be targeted in the months to come so stay tuned,” the threat actor added.

    Users are advised to perform the following actions to contain the threat –

    • Audit all environments for litellm versions 1.82.7 or 1.82.8, and if found, revert to a clean version
    • Isolate affected hosts
    • Check for the presence of rogue pods in Kubernetes clusters
    • Review network logs for egress traffic to “models.litellm[.]cloud” and “checkmarx[.]zone”
    • Remove the persistence mechanisms
    • Audit CI/CD pipelines for usage of tools like Trivy and KICS during the compromise windows
    • Revoke and rotate all exposed credentials
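The per-host checks above can be scripted. Here is a minimal triage sketch, assuming Python 3.8+ on the host being examined; the systemd user-unit path is our assumption based on the reported service name, and the version and file indicators come from the article:

```python
# Minimal per-host triage sketch for the remediation steps above.
import os
from importlib import metadata

BAD_VERSIONS = {"1.82.7", "1.82.8"}     # backdoored litellm releases
IOC_PATHS = [                           # reported persistence artifacts
    os.path.expanduser("~/.config/sysmon/sysmon.py"),
    # Assumed location for a systemd *user* unit named sysmon.service:
    os.path.expanduser("~/.config/systemd/user/sysmon.service"),
]

def litellm_status():
    """Return the installed litellm version, or None if not installed."""
    try:
        return metadata.version("litellm")
    except metadata.PackageNotFoundError:
        return None

def triage():
    findings = []
    version = litellm_status()
    if version in BAD_VERSIONS:
        findings.append(f"backdoored litellm {version} installed")
    for path in IOC_PATHS:
        if os.path.exists(path):
            findings.append(f"persistence artifact present: {path}")
    return findings

print(triage() or "no known IOCs found on this host")
```

Network-level checks (egress to the two reported C2 domains) and credential rotation still need to happen in your proxy logs and secret stores, respectively; a host-local script cannot cover those.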

    “The open source supply chain is collapsing in on itself,” Gal Nagli, head of threat exposure at Google-owned Wiz, said in a post on X. “Trivy gets compromised → LiteLLM gets compromised → credentials from tens of thousands of environments end up in attacker hands → and those credentials lead to the next compromise. We are stuck in a loop.”




  • 5 Learnings from the First-Ever Gartner Market Guide for Guardian Agents

    On February 25, 2026, Gartner published its inaugural Market Guide for Guardian Agents, marking an important milestone for this emerging category. For those unfamiliar with the various Gartner report types, “a Market Guide defines a market and explains what clients can expect it to do in the short term. With the focus on early, more chaotic markets, a Market Guide does not rate or position vendors within the market, but rather more commonly outlines attributes of representative vendors that are providing offerings in the market to give further insight into the market itself.”

    And if Guardian Agent is an unfamiliar term, Gartner defines it quite simply. “Guardian agents supervise AI agents, helping ensure agent actions align with goals and boundaries.” Enterprise security and identity leaders can request a limited distribution copy of the Gartner Market Guide for Guardian Agents.

    Learning 1: Why Guardian Agent technology is important

    One need only read the news (in the Wall Street Journal, the Financial Times, Forbes, Bloomberg, and so on) to see that AI agents are a thing now. But Team8’s 2025 CISO Village Survey quantified it, finding that:

    • Nearly 70% of enterprises already run AI agents (any system that can answer and act) in production.
    • Another 23% are planning deployments in 2026.
    • Two-thirds are building them in-house. 

    However, in the market guide, Gartner asserts that this fast enterprise adoption is outpacing traditional governance controls. This raises the risk that “as AI agents become more autonomous and embedded in critical workflows, the risks of operational failure and noncompliance escalate.”

    We concur, and having read about the recent cloud provider outages stemming from autonomous AI agent actions, we are not surprised. What we see across early adoption is that, even more so than traditional service accounts, AI agent deployment creates more identity dark matter: the invisible, unmanaged layer of identity. It includes local-credential authentication where it is offered, never-expiring tokens that are easily forgotten, full-permission access granted regardless of user or job, and more.

    Not only that, as we shared in our piece on “Lazy LLMs,” AI agents are, by design, shortcut seekers, always looking for the most efficient path to return a satisfactory outcome to each prompt. In doing so, they often exploit identity dark matter (orphaned or dormant accounts and loose tokens, usually with local clear-text credentials and excessive privileges) that lets them reach the “end of job” regardless of whether they should have been allowed to. This is how unintended or unimaginable incidents arise.

    As if that weren’t enough business risk, we note that the 2026 CrowdStrike Global Threat Report goes one step further, sharing that “Adversaries are also actively exploiting AI systems themselves, injecting malicious prompts into GenAI tools at more than 90 organizations and abusing AI development platforms.”

    To learn more about how AI agents both expand what we call “Identity Dark Matter” and even exploit it themselves, check out our previous article in The Hacker News.

    Learning 2: Core capabilities of Guardian Agents

    So, having established the need for AI agent supervision, the next question for us becomes how, technically, to address that need. This is where, in our opinion, Gartner is extremely valuable: looking across the market and vendors to understand what is possible, and winnowing it down to what’s most valuable given the problem to be solved.

    The market guide outlines mandatory features in three core areas:

    1. AI Visibility and Traceability: Can you see and follow the actions of each AI agent? 
    2. Continuous Assurance and Evaluation: How do you retain confidence that agents remain secure from compromise and compliant in action? 
    3. Runtime Inspection and Enforcement: “ensure that AI agents’ actions and outputs match defined intentions, goals, and governance policies, preventing unintended behaviors.”

    The market guide details nine features across these core areas. Many of them have helped shape the five principles we believe underpin secure (and productive) use of AI agents.

    1. Pair AI Agents with Human Sponsors: It is our belief that every agent should not only be identified and monitored, but also tied to an accountable human operator. 
    2. Dynamic, Context-Aware Access: We believe AI agents should not hold standing, permanent privileges. Their entitlements should be time-bound, session-aware, and limited to least privilege.
    3. Visibility and Auditability: In our view, visibility isn’t just “we logged it.” You need to tie actions to data reach: what the agent accessed, what it changed, what it exported, and whether that action touched regulated or sensitive datasets. 
    4. Governance at Enterprise Scale: In our minds, AI agent adoption should extend across both new and legacy systems within a single, consistent governance fabric, so that security, compliance, and infrastructure teams are not working in silos. 
    5. Commitment to Good IAM Hygiene: As with all identities, authentication flows, authorization permissions, and implemented controls, strong hygiene- on the application server as well as the MCP server- is critical to keep every user within the proper bounds.
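To make principles 1 through 3 concrete, here is a hypothetical sketch (all names and structures are illustrative, not any vendor's API) of a time-bound, least-privilege agent grant with sponsor attribution and an audit trail:

```python
# Hypothetical sketch of principles 1-3: time-bound, least-privilege grants
# for an AI agent instead of standing credentials. All names are illustrative.
import time
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentGrant:
    agent_id: str
    sponsor: str        # principle 1: an accountable human operator
    scopes: frozenset   # principle 2: least privilege, explicitly enumerated
    expires_at: float   # principle 2: time-bound, not standing, entitlement

    def allows(self, scope: str) -> bool:
        return time.time() < self.expires_at and scope in self.scopes

audit_log = []

def authorize(grant: AgentGrant, scope: str) -> bool:
    decision = grant.allows(scope)
    # principle 3: tie every action to the agent, its sponsor, and the scope
    audit_log.append((grant.agent_id, grant.sponsor, scope, decision))
    return decision

grant = AgentGrant("billing-agent-7", "alice@example.com",
                   frozenset({"invoices:read"}), time.time() + 900)  # 15 min
print(authorize(grant, "invoices:read"))   # True: in scope, not expired
print(authorize(grant, "invoices:write"))  # False: least privilege denies it
```

Real guardian agent platforms layer far more on top (session awareness, data-reach tracking, runtime enforcement), but the shape of the control is the same: every decision is scoped, expiring, attributed, and logged.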

    Learning 3: Different vendor approaches to Guardian AI

    That said, even when vendors try to address the same Guardian Agent requirements, they often solve the problem using very different architectural models.

    Gartner outlines six emerging delivery and integration approaches, which, for adopters, matter more than they may first appear. These are not just packaging choices. They determine where control lives, how much visibility you actually get, how enforceable the policy is, and how much of your agent estate will fall outside coverage.

    Here is our quick take on each model:

    • Standalone Oversight Platforms are typically the easiest place to start. They collect logs, telemetry, and events into one place and can provide meaningful posture visibility, auditability, and analysis. But many of these platforms still lean more toward observation than intervention. That is useful, but it is not the same as control. If your AI risk posture depends on stopping bad actions before they happen, visibility alone will not be enough.
    • AI/MCP Gateways are the most intuitive model: put a control point in the middle and force agent traffic through it. That can create a powerful centralized layer for monitoring and policy enforcement across multiple agents. But it only works if traffic actually goes through that layer. In practice, gateways can become both a bottleneck and a false comfort. If teams bypass them, or if agent interactions happen outside the governed path, visibility breaks down quickly.
    • Embedded or In-Line Run-Time Modules sit closer to execution, inside the agent platform, an AI management platform, or an LLM proxy. That makes them appealing because they are often easier to turn on and can act with more immediacy. The downside is that they are usually platform-bound. They govern the environment they live in, not the broader enterprise. For adopters, that means great local control, but weak enterprise-wide consistency if your agents span multiple stacks.
    • Orchestration Layer Extensions are attractive in environments where orchestration already acts as the operating layer for multi-agent workflows. They can add policy, visibility, and oversight at the workflow level. But they also assume orchestration is where meaningful control should sit. That is only true if the organization actually runs its agents through a common orchestration layer. Many will not. So for adopters, this model is powerful in the right architecture and irrelevant in the wrong one.
    • Hybrid Edge – Cloud Models are where things start to get more realistic. As Gartner notes, these are becoming more important as agent ecosystems become more endpoint-centric. This model spreads oversight between local execution environments and cloud analysis, which can reduce latency and improve runtime relevance. For adopters, the value is clear: it avoids over-centralizing everything in one choke point. But it also raises the complexity bar. Distributed governance is stronger in theory, but harder to implement well. 
    • Coordination Mechanisms (standards, APIs, and hooks) are less a deployment model than the connective tissue between them. And today, that tissue is immature. Gartner is explicit that integration across AI agent platforms remains difficult because standard interfaces are still lacking. That means adopters should be careful not to mistake “supports standards” for “works seamlessly in production.” The coordination layer is necessary, but it is not yet mature enough to be treated as solved.

    Regardless of technical approach, Gartner gives clear guidance about the need for something more than the governance of individual AI agents built into a single cloud provider, identity tool, or AI platform. Specifically, they call out the following:

    “A neutral, trusted guardian agent layer with multiple guardian agents performing separate but integrated oversight functions enforces routing across all providers. Thus, the guardian agent acts as the missing universal enforcement mechanism.”

    Learning 4: Guardian Agents Will Become an Independent Layer of Enterprise Control

    Perhaps the most important long-term takeaway for us from the Market Guide is that Guardian Agents will not simply be another feature embedded in AI platforms. As we read it, Gartner is quite explicit: “enterprises will require independent guardian agent layers that operate across clouds, platforms, identity systems, and data environments.”

    Why? Because AI agents themselves do not live in one place.

    Agents interact with APIs, applications, data repositories, infrastructure, and even other agents across multiple environments. A cloud provider may be able to supervise agents running inside its own ecosystem, but once those agents call tools, delegate tasks, or operate across providers, no single platform can enforce governance alone.

    That is why we believe Gartner argues that organizations will increasingly deploy enterprise-owned guardian agent layers that sit above individual platforms and supervise agents across the full enterprise environment.

    In other words, governance cannot live only inside the platforms that create or host AI agents. It needs to live above them.

    Put simply: the future of agent governance will not be platform-native supervision. It will be enterprise-owned oversight. And the organizations that adopt that architecture early will be far better positioned to scale agentic AI safely, without introducing a new generation of invisible automation risk across their infrastructure, data, and identities.

    Learning 5: There is Still Time, But Not Forever

    For all of the excitement about AI agents and the big brand news stories about them replacing jobs, the Guardian Agent market is still early. According to Gartner, “Today, guardian agent deployments are mainly prototypes or pilots, although advanced organizations are already using early versions of them to supervise AI agents.” 

    But it’s coming fast. They note that “the guardian agent market — encompassing technologies for the oversight, security, and governance of autonomous AI agents — is entering a phase of accelerated growth, underpinned by the rapid adoption of agentic AI across industries.”

    Frankly, we would make a similar statement about the Agentic market overall. Yes, we have implemented AI agents within Orchid- the company and the product. But organizations, ourselves included, are just scratching the surface of what’s possible. Have individual employees started using their own personal AI agents? Yes. Do many technology vendors offer built-in AI agents, beyond the simple chatbot? Yes. Have some of the earliest adopters implemented a corporate standard platform to augment or replace jobs? Yes (but said with some skeptical hesitation).

    However, as the saying goes, it’s too late to bar the door after the horse is out of the barn. Orchid Security recommends ensuring AI agent visibility sooner rather than later, and making certain that the same identity and access management guardrails and governance required for human users are in place to guide their AI companions as well.

    The Bottom Line (We Will Say it Again)

    AI agents are here. They are already changing how enterprises operate.

    The challenge is not whether to use them, but how to govern them.

    Safe adoption of AI agents requires applying the same principles that identity practitioners know well (least privilege, lifecycle management, and auditability) to a new class of non-human identities.

    If identity dark matter is the sum of what we can’t see or control, then unmanaged AI agents may become its fastest-growing source if left unchecked. The organizations that act now to bring them into the light will be the ones that can move quickly with AI without sacrificing trust, compliance, or security. That’s why Orchid Security is building identity infrastructure to eliminate dark matter and make agentic AI adoption safe to deploy at enterprise scale.

    Request the limited availability Gartner Market Guide for Guardian Agents to come to your own learnings about AI agents and their guardians.







  • Munition Remnants Pictured at Site of Deadly Chad Strike Match Weapon Previously Used by Sudan’s RSF

    Munition remnants pictured at the site of a strike that killed at least 17 people in the town of Tiné, Chad, last week appear to match a weapon previously used by Sudan’s Rapid Support Forces (RSF) in the war with Sudanese government forces – despite RSF denials of involvement in the incident.

    Photographs showed what appeared to be a match for the rear control section of a Chinese-made GB25A or GB50A bomb, which can be dropped by Chinese-made drones. Amnesty International previously identified a GB50A used by the RSF that it said had “almost certainly” been re-exported to the group by the UAE. 

    Remnants of a Chinese-made GB25A or GB50A bomb pictured in Tiné, Chad (left). A reference image showing GB25A and GB50A bombs.

    The first photographs of the remnants were posted by the Chadian publication ATPE CHAD, which reported that a public prosecutor had visited the site of impact.

    A separate set of photographs showing even clearer visuals of the remnants was subsequently shared by the N’Djamena-based broadcaster MRTV. Its Facebook page showed Chadian soldiers standing beside the remnants.

    Images were also posted on Facebook by the Department of Public Safety and Immigration in Chad.

    By using the time displayed on a watch worn by an official in one of the pictures, it was possible to estimate that the images were likely taken in the late afternoon. By comparing this with solar data, the shadows visible in the photos, and other visual details, it was then possible to infer the approximate layout of nearby buildings and the distribution of trees where the remnants were found.

    Picture featuring Ali Ahmat Aghabache, Chad Minister of Public Security and Immigration. The time on his watch was used as an approximate time of capture of the set of images at the remnant site. Credit: Ministère De La Sécurité Publique Et De L’immigration Facebook.

    With this information, and using satellite imagery, we then geolocated the photos to the northwest of the Bir Tine neighbourhood, just 650 metres from the border with the Western Darfur region of Sudan that is largely controlled by the RSF.

    Top: Using the approximate time of capture, Bellingcat analyzed solar data to reconstruct the site layout and created an approximate panoramic view. Bottom: This reconstruction enabled the identification of building features and tree positions, leading to the precise determination of the site’s coordinates.  (15.043158, 22.818438)
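The solar-geometry step of this kind of analysis can be approximated in a few lines. The sketch below uses NOAA's low-accuracy solar position formulas with the geolocated coordinates; the date and clock time are illustrative assumptions of ours, not values from the investigation:

```python
# Rough sketch of the solar-geometry step: given geolocated coordinates and
# an assumed capture time, estimate the sun's elevation and azimuth so shadow
# directions in photos can be cross-checked. Based on NOAA's low-accuracy
# solar position formulas (good to roughly a degree).
import math

def solar_position(lat, lon, year_day, hour_utc):
    """Return (elevation_deg, azimuth_deg clockwise from north)."""
    g = 2 * math.pi / 365 * (year_day - 1 + (hour_utc - 12) / 24)
    # Equation of time (minutes) and solar declination (radians)
    eqtime = 229.18 * (0.000075 + 0.001868 * math.cos(g)
                       - 0.032077 * math.sin(g)
                       - 0.014615 * math.cos(2 * g)
                       - 0.040849 * math.sin(2 * g))
    decl = (0.006918 - 0.399912 * math.cos(g) + 0.070257 * math.sin(g)
            - 0.006758 * math.cos(2 * g) + 0.000907 * math.sin(2 * g)
            - 0.002697 * math.cos(3 * g) + 0.00148 * math.sin(3 * g))
    tst = hour_utc * 60 + eqtime + 4 * lon        # true solar time, minutes
    ha = math.radians(tst / 4 - 180)              # hour angle
    lat_r = math.radians(lat)
    cos_zen = (math.sin(lat_r) * math.sin(decl)
               + math.cos(lat_r) * math.cos(decl) * math.cos(ha))
    zen = math.acos(max(-1.0, min(1.0, cos_zen)))
    elevation = 90 - math.degrees(zen)
    azimuth = math.degrees(math.atan2(
        -math.sin(ha),
        math.tan(decl) * math.cos(lat_r) - math.sin(lat_r) * math.cos(ha)))
    return elevation, azimuth % 360

# Coordinates from the geolocation above; the time is a hypothetical
# late-afternoon capture (15:30 UTC = 16:30 Chad time) in mid-November.
elev, azim = solar_position(15.043158, 22.818438, year_day=320, hour_utc=15.5)
print(f"elevation {elev:.1f} deg, azimuth {azim:.1f} deg")
```

A low sun in the west-southwest like this produces the long eastward shadows that make building and tree positions in such photos comparable against satellite imagery.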

    Remnants from the control sections of other GB25A or GB50A bombs have previously been found after RSF attacks in Sudan, including attacks on Kassala Airport and Coral Marina Hotel in Port Sudan (as seen in the images below).

    Still frames of remnants found in Tiné, Chad (left), Kassala Airport, Sudan (centre) and Coral Marina Hotel Sudan (right).

    BBC News reported that the RSF is suspected of carrying out the attack.

    However, the RSF has denied any involvement and blamed Sudan’s army, the Sudanese Armed Forces (SAF). The SAF has in turn said the RSF was responsible. Chad’s president on Thursday ordered the military to retaliate against future attacks from Sudan. 


    RSF spokesperson Al-Fateh Qurashi told Bellingcat via WhatsApp: “Our forces are not responsible for any targeting of neighboring Chad, and we have no connection to this targeting.” Qurashi instead blamed forces aligned with the Sudanese government over the strike. 

    Imran Abdullah, an adviser to the RSF commander, told Bellingcat via WhatsApp that satellite imagery tracked the drone and that it belonged to the forces aligned with the Sudanese government. However, Abdullah refused to share the imagery he referred to saying: “It can be published if an independent international commission of inquiry is in place.”

    The SAF is not known to use any Chinese-made drones or bombs like the GB25A or GB50A; it has been observed using Turkish- and Iranian-made drones and munitions such as the MAM-L bomb.

    Bellingcat sought comment on the use of these weapons from the Chinese manufacturer, Norinco, as well as the UAE given Amnesty’s previous reports about how a GB50A was used by the RSF after “almost certainly” being re-exported to Sudan. Neither responded prior to publication.

    The conflict has previously spilled over the border into Chad. Reuters reported last month the country closed its border with Sudan after five Chadian soldiers were killed following clashes in Tiné between the RSF and militia fighters loyal to the Sudanese government.


    Ziyu Wan and Riccardo Giannardi contributed from Bellingcat’s volunteer community.





