Three children are among those who died in the accident, which happened on the return leg of their journey.
Source link
Category: Uncategorized
-

Bus carrying pilgrims overturns in Brazil killing 15 on board
-

Inside the operation to destroy drug labs in the Colombian jungle
BBC Senior international correspondent Orla Guerin joins specialist police on a mission over Colombia’s cocaine heartland, tasked with destroying crude cocaine labs hidden deep in the jungle.
The Jungle Commandos, a police unit armed by the United States and originally trained by Britain’s SAS, have been looking for the labs which are little more than shacks with drums of chemicals and fresh coca leaves, ready to be turned into a paste.
Major Cristhian Cedano Díaz, a 16-year veteran of the war on drugs, told the BBC a cocaine lab can be rebuild quickly, “in just one day”.
Colombia has been criticised by US President Trump for not doing enough to combat the drugs trade, though Colombia’s President Gustavo Petro claims that his government has seized the largest amount of drugs in history.
Correspondent: Orla Guerin, Camera: Goktay Koraltan Producer: Wietske Burema Edited by: Marina Costa
-

Docker Fixes Critical Ask Gordon AI Flaw Allowing Code Execution via Image Metadata
Cybersecurity researchers have disclosed details of a now-patched security flaw impacting Ask Gordon, an artificial intelligence (AI) assistant built into Docker Desktop and the Docker Command-Line Interface (CLI), that could be exploited to execute code and exfiltrate sensitive data.
The critical vulnerability has been codenamed DockerDash by cybersecurity company Noma Labs. It was addressed by Docker with the release of version 4.50.0 in November 2025.
“In DockerDash, a single malicious metadata label in a Docker image can be used to compromise your Docker environment through a simple three-stage attack: Gordon AI reads and interprets the malicious instruction, forwards it to the MCP [Model Context Protocol] Gateway, which then executes it through MCP tools,” Sasi Levi, security research lead at Noma, said in a report shared with The Hacker News.
“Every stage happens with zero validation, taking advantage of current agents and MCP Gateway architecture.”
Successful exploitation of the vulnerability could result in critical-impact remote code execution for cloud and CLI systems, or high-impact data exfiltration for desktop applications.
The problem, Noma Security said, stems from the fact that the AI assistant treats unverified metadata as executable commands, allowing it to propagate through different layers sans any validation, allowing an attacker to sidestep security boundaries. The result is that a simple AI query opens the door for tool execution.
With MCP acting as a connective tissue between a large language model (LLM) and the local environment, the issue is a failure of contextual trust. The problem has been characterized as a case of Meta-Context Injection.
“MCP Gateway cannot distinguish between informational metadata (like a standard Docker LABEL) and a pre-authorized, runnable internal instruction,” Levi said. “By embedding malicious instructions in these metadata fields, an attacker can hijack the AI’s reasoning process.”
In a hypothetical attack scenario, a threat actor can exploit a critical trust boundary violation in how Ask Gordon parses container metadata. To accomplish this, the attacker crafts a malicious Docker image with embedded instructions in Dockerfile LABEL fields.
While the metadata fields may seem innocuous, they become vectors for injection when processed by Ask Gordon AI. The code execution attack chain is as follows –
- The attacker publishes a Docker image containing weaponized LABEL instructions in the Dockerfile
- When a victim queries Ask Gordon AI about the image, Gordon reads the image metadata, including all LABEL fields, taking advantage of Ask Gordon’s inability to differentiate between legitimate metadata descriptions and embedded malicious instructions
- Ask Gordon to forward the parsed instructions to the MCP gateway, a middleware layer that sits between AI agents and MCP servers.
- MCP Gateway interprets it as a standard request from a trusted source and invokes the specified MCP tools without any additional validation
- MCP tool executes the command with the victim’s Docker privileges, achieving code execution
The data exfiltration vulnerability weaponizes the same prompt injection flaw but takes aim at Ask Gordon’s Docker Desktop implementation to capture sensitive internal data about the victim’s environment using MCP tools by taking advantage of the assistant’s read-only permissions.
The gathered information can include details about installed tools, container details, Docker configuration, mounted directories, and network topology.
It’s worth noting that Ask Gordon version 4.50.0 also resolves a prompt injection vulnerability discovered by Pillar Security that could have allowed attackers to hijack the assistant and exfiltrate sensitive data by tampering with the Docker Hub repository metadata with malicious instructions.
“The DockerDash vulnerability underscores your need to treat AI Supply Chain Risk as a current core threat,” Levi said. “It proves that your trusted input sources can be used to hide malicious payloads that easily manipulate AI’s execution path. Mitigating this new class of attacks requires implementing zero-trust validation on all contextual data provided to the AI model.”


