Flowise RCE Shows AI Tooling Is Now Part of the Attack Surface
A red workflow node shows how an AI orchestration tool can become part of the attack surface.📷 AI-generated / Tech&Space
- ★CVE-2025-59528 affects Flowise 3.0.5 and allows JavaScript execution through CustomMCP configuration handling.
- ★The fix has existed since version 3.0.6, with newer 3.1.x releases available for safer updates.
- ★Active exploitation shows that AI orchestration tools must be treated as production infrastructure.
AN AI WORKFLOW IS NOT JUST A CANVAS
Flowise is a popular open-source tool for visually building AI workflows, chatbots, and agents. Users connect models, tools, document stores, API calls, and logic on a canvas, but the result runs on a server. That is why a critical Flowise vulnerability is not merely a user-interface problem: an attacker who achieves code execution lands on a server that may sit next to API keys, internal data, and automation paths.
The vulnerability is tracked as CVE-2025-59528 and is tied to Flowise version 3.0.5. The GitHub security advisory describes the issue in the CustomMCP node: configuration data for connecting to an external MCP server can be processed in a way that executes user-supplied text as JavaScript. In plain terms, a feature meant to read configuration can be forced into running code.
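The bug class is easy to illustrate outside Flowise. The sketch below is a generic Python analogy, not Flowise's actual code (which is JavaScript): the unsafe path hands configuration text to an evaluator, while the safe path hands it to a data parser that can only build values.

```python
# Generic illustration of the bug class behind CVE-2025-59528:
# configuration text handed to an evaluator instead of a data parser.
# In Python the unsafe pattern is eval(); in Node.js the equivalent
# risk comes from eval() or new Function() on user-supplied text.
import json

user_config = '{"url": "http://internal-mcp:8080"}'

# Unsafe: eval() executes arbitrary expressions. A string like
# '__import__("os").system("id")' would run a shell command here.
config = eval(user_config)          # DON'T: treats config as code

# Safe: a data parser only builds values; it never executes anything.
config = json.loads(user_config)    # DO: treats config as data
print(config["url"])
```

Both lines happen to produce the same dictionary for benign input, which is exactly why the unsafe version survives testing: the difference only shows up when an attacker supplies the "configuration".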
That explains the maximum severity. This is not a cosmetic bug, a bad screen state, or a routine crash. Code running inside a Node.js environment can, depending on deployment settings, reach files, spawn processes, and touch what the server can see. For an AI workflow tool, that matters because these systems often hold model-provider keys, vector-database access, and connections to internal tools.
The good news is that the fix has existed since Flowise 3.0.6. The better operational advice is not to stop at the minimum patched version: the GitHub releases page shows newer 3.1.x releases available. For administrators, the message is short: if a Flowise 3.0.5 instance is still running, treat it as an incident candidate until proven otherwise.
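The version check itself is trivial once the version string is in hand. A minimal sketch of the decision, assuming a plain numeric dotted version (how you obtain the string depends on your deployment: package metadata for npm installs, the image tag for Docker, and so on):

```python
# Minimal sketch: decide whether a reported Flowise version predates
# the 3.0.6 fix for CVE-2025-59528. Assumes plain numeric dotted
# versions; pre-release suffixes would need a real semver library.

FIXED = (3, 0, 6)

def is_patched(version: str) -> bool:
    """Return True if `version` is at least 3.0.6."""
    parts = tuple(int(p) for p in version.split("."))
    return parts >= FIXED  # tuples compare element by element

for v in ["3.0.5", "3.0.6", "3.1.2"]:
    status = "patched" if is_patched(v) else "VULNERABLE - upgrade now"
    print(f"{v}: {status}")
```

In practice the advice above still holds: do not stop at 3.0.6 when a current 3.1.x release is available.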
The CustomMCP flaw is patched, but exposed self-hosted instances remain the real risk
Self-hosted AI workflow servers need patching, access controls, secret rotation, and log review.📷 AI-generated / Tech&Space
A PATCH IS NOT FINISHED WHILE INSTANCES STAY PUBLIC
The renewed alarm is not just about an old advisory. BleepingComputer reported that VulnCheck’s Canary network detected active exploitation of CVE-2025-59528. The activity appeared limited at the time, but researchers warned that roughly 12,000 to 15,000 Flowise instances were exposed online. Nobody knows how many of those are vulnerable, but attackers do not need all of them to be for mass scanning to pay off.
The simple explanation matters. A patch in a repository does not protect a deployed server. Self-hosted tools do not update themselves. Someone has to know the instance exists, check the version, update it, restart the service, inspect logs, and rotate secrets if compromise is plausible. In AI prototypes, that last step is often skipped because the tool was “just for testing” and then quietly stayed behind a reverse proxy for months.
The Flowise incident should therefore be read as a signal for the broader AI supply chain. AI security discussions often focus on prompt injection, jailbreaks, and model leakage. This case is more grounded: a classic backend flaw in software that connects models to external systems. An attacker does not need to persuade a model to behave badly if they can attack the plumbing that gives the model tools and data.
The practical response does not need drama. Upgrade to a safe version. Remove Flowise from the public internet if public access is not required. Put it behind a VPN, SSO, or strict network controls. Review logs for unusual CustomMCP and node-load method activity. Rotate API keys that were reachable from the instance. The checklist is boring, but that kind of boring work is what keeps an AI tool from becoming an entry point into the rest of the environment.
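The log-review step can start as something very simple. The sketch below is a hedged first pass, not a detection rule: it assumes plain-text log files and that entries touching the node contain the string "customMCP" — verify both against your deployment's real log format (Docker installs usually log to stdout, readable via `docker logs`).

```python
# Minimal triage sketch: flag log lines that mention the CustomMCP node.
# Assumptions: logs are plain-text *.log files under `log_dir`, and
# relevant entries contain "customMCP" (case-insensitive). Check your
# deployment's actual log format before relying on this.
from pathlib import Path

def scan_mcp_activity(log_dir: str) -> list[str]:
    """Return log lines mentioning CustomMCP, as 'file:lineno: line'."""
    hits = []
    for log_file in Path(log_dir).rglob("*.log"):
        text = log_file.read_text(errors="replace")
        for lineno, line in enumerate(text.splitlines(), 1):
            if "custommcp" in line.lower():
                hits.append(f"{log_file}:{lineno}: {line}")
    return hits
```

A hit is not proof of compromise, since legitimate workflows use the node too; it is a starting point for deciding which instances need closer review and key rotation first.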
The conclusion is simple: AI orchestration is now production software. Once a tool has access to models, documents, keys, and actions, it is no longer a toy for fast demos. The Flowise bug is a reminder that the new AI stack still depends on old security basics: patch management, isolation, secrets hygiene, and logs.