Over the past few days, the developer community around Discord bots has been shaken by the sudden enforcement action Discord took against BotGhost. The conversation has quickly evolved from "what happened?" to "who’s next?" — especially for services offering white-label or custom-hosted bots.
As someone who's been building and maintaining a large-scale public bot — StartIT with over 400,000 servers and growing — and whose entire premium model is based on offering custom bot hosting with private tokens, I want to share my view on what went wrong, what could’ve gone better, and what this means for everyone else operating in the same space.
My Perspective
Let me start with what I believe: BotGhost is primarily responsible for the situation it found itself in.
From a design standpoint, the platform made critical mistakes:
BotGhost failed to sanitize user input by default, relying instead on the assumption that bot owners would write secure logic. That contradicts its own branding as a no-code tool for non-programmers: if you're targeting users with no coding knowledge, you can't also expect them to handle input sanitization.
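To make that concrete, here is a minimal, hypothetical sketch in Python (the function names are mine; BotGhost's real stack isn't public) of the difference between evaluating user input and treating it as inert data:

```python
# Hypothetical illustration: evaluating user input vs. treating it as data.
from string import Template

def render_unsafe(template: str, value: str) -> str:
    # DANGEROUS: substituting first and then eval-ing as an f-string means a
    # value like "{__import__('os').environ}" executes as real code.
    return eval('f"' + template.format(var=value) + '"')

def render_safe(template: str, value: str) -> str:
    # Safe: Template substitution only ever inserts text; nothing executes.
    return Template(template).safe_substitute(var=value)

payload = "{__import__('os').environ}"
print(render_safe("Welcome, $var!", payload))
# -> Welcome, {__import__('os').environ}!   (printed as inert text)
```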
The vulnerability NTTS showcased in his video was not a trivial issue or a one-off mistake. It was a combination of multiple flaws: the ability to inject malicious payloads, a complete lack of containerization or sandboxing, and shared infrastructure where bots with entirely separate tokens shared the same runtime and database. That’s an architectural failure — malicious code from one bot could affect another. This should never be possible. Any experienced developer knows that tenant isolation is a basic security principle.
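And for the isolation point, here is a hedged sketch of what per-tenant isolation can look like at the infrastructure level: each customer bot gets its own container, its own token, and its own data volume. The image name, paths, and helper function are my own placeholders, not anyone's actual infrastructure:

```python
# Hypothetical per-tenant launcher: one container per customer bot, so a
# compromised bot cannot touch another bot's runtime, token, or database.
import subprocess

def launch_bot(bot_id: str, token: str) -> None:
    subprocess.run([
        "docker", "run", "--detach",
        "--name", f"bot-{bot_id}",
        "--memory", "256m", "--cpus", "0.5",      # per-tenant resource limits
        "--env", f"DISCORD_TOKEN={token}",         # only this bot's token
        "--volume", f"/data/{bot_id}:/app/data",   # private storage, not shared
        "bot-runtime:latest",                      # placeholder runtime image
    ], check=True)
```

In production you'd pass the token through a secrets manager rather than a plain environment variable, but the point stands: nothing is shared between tenants by default.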
Worse, their response to the vulnerability was dismissive. Rather than addressing it seriously, they downplayed it — showing a lack of maturity around cybersecurity that likely pushed Discord to act.
Given this, I completely understand why Discord lost trust in BotGhost as a service. Their skepticism is justified.
That said, I don’t believe banning token handling entirely was the right move.
Discord’s current response — requiring BotGhost to stop using user-submitted bot tokens within 30 days — is excessively harsh and rushed, especially for a platform whose entire infrastructure is built around this model.
Instead, I believe Discord should have:
- Clearly communicated the concerns earlier, rather than issuing what amounts to a sudden ultimatum.
- Offered more reasonable remediation steps, such as requesting architectural changes (e.g., runtime isolation) or requiring security layers like automated token scanners/firewalls that prevent sensitive tokens from leaking (a minimal scanner is sketched after this list).
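A token scanner doesn't have to be sophisticated to catch the obvious leaks. Here is a minimal sketch; the regex only approximates the shape of a Discord bot token (three dot-separated base64 segments), so a production scanner would need a more thorough pattern:

```python
# Minimal outbound "firewall": redact anything token-shaped before it leaves.
import re

# Rough approximation of a bot token's three dot-separated segments.
TOKEN_PATTERN = re.compile(r"[\w-]{23,28}\.[\w-]{6,7}\.[\w-]{27,}")

def scrub_outgoing(message: str) -> str:
    return TOKEN_PATTERN.sub("[REDACTED TOKEN]", message)

fake = "A" * 26 + "." + "B" * 6 + "." + "C" * 38    # token-shaped test string
print(scrub_outgoing(f"debug dump: {fake}"))
# -> debug dump: [REDACTED TOKEN]
```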
On Discord’s side, there are also things that should be addressed:
The PATCH /users/@me endpoint returns the bot token in plain text, even though this isn't documented. Undocumented fields in APIs are common, but returning secrets like this should never happen, full stop. The token field is used by Discord internally when users change their passwords, but it shouldn't be exposed to bot accounts, as it has no real use case there. It's reckless to return sensitive credentials without warning.

Discord cited TOS violations, but I don't agree that BotGhost clearly broke the terms. The Developer Policy allows service providers to manage login credentials, and BotGhost even has "host" in its name. Whether or not you believe they qualify as a "provider," the line isn't as clear as Discord implies.
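Going back to the endpoint issue: the behavior is easy to observe first-hand. A hedged sketch using the third-party requests library (BOT_TOKEN is a placeholder; never hard-code a real token):

```python
import requests

BOT_TOKEN = "..."  # placeholder; load from a secrets store in real code

# PATCH /users/@me with an empty body is a documented "modify current user"
# call; the issue described above is that the response also carries an
# undocumented token field.
resp = requests.patch(
    "https://discord.com/api/v10/users/@me",
    headers={"Authorization": f"Bot {BOT_TOKEN}"},
    json={},
)
body = resp.json()
if "token" in body:
    body["token"] = "[REDACTED]"  # never let this reach your logs
print(body)
```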
There is no API method to programmatically reset a bot's token. Even in the case of a confirmed leak or breach, token invalidation requires manually contacting Discord Support. I brought this up in discussion with Discord staff, since the endpoint for invalidation already exists, but they refused to give us access to it, telling developers to just go through support.discord.com instead. This is inefficient and contradicts the platform's supposed emphasis on security.
Do I think other white-label or custom bot hosting platforms are at risk?
Not really — at least not if they’ve taken proper precautions. Unlike BotGhost, most of them haven’t had a breach, and Discord tends to act only after public incidents. That said, the precedent being set here should be a wake-up call to all similar platforms.
Are white-label bots valuable?
Absolutely. In our case, custom-hosted bots serve more than cosmetic purposes (changing the bot's name/avatar). They allow:
- Controlled versioning (e.g., premium users get updates earlier; see the sketch after this list).
- Deployment on isolated servers for better performance and testing.
- Custom behaviors tailored to specific clients.
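As an example of the first item, release gating can be as simple as a per-tenant channel map. This is a hypothetical sketch, not StartIT's actual configuration:

```python
# Hypothetical release-channel gating for custom-hosted bots.
RELEASE_CHANNELS = {
    "stable": "1.8.0",  # everyone
    "early": "1.9.0",   # premium custom-hosted bots receive updates first
}

TENANTS = {
    "bot_acme": {"channel": "early"},
    "bot_plain": {"channel": "stable"},
}

def version_for(bot_id: str) -> str:
    return RELEASE_CHANNELS[TENANTS[bot_id]["channel"]]

print(version_for("bot_acme"))   # -> 1.9.0
print(version_for("bot_plain"))  # -> 1.8.0
```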
It’s also a matter of branding and perceived ownership: customers enjoy having “their own” bot, and it’s a perfectly valid way to use existing all-in-one bots without having to hire a developer.
If I were in Discord’s shoes?
I wouldn’t immediately ban token handling. I would instead:
- Demand BotGhost acknowledge their security failings, and implement a firewall or scanner that blocks any outgoing messages containing potential tokens or sensitive credentials.
- Force platform-level isolation, so a component like a button or message handler from one bot cannot influence or leak into another bot’s environment (one way to enforce this is sketched below).
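One way to enforce that second point at the application layer is to namespace every handler by the bot it belongs to, so an interaction can only ever reach code registered by its own application. A hypothetical sketch:

```python
# Hypothetical dispatch layer: handlers are keyed by (application_id,
# custom_id), so one bot's button handler can never fire for another bot.
from typing import Callable, Dict, Tuple

Handler = Callable[[dict], None]
HANDLERS: Dict[Tuple[str, str], Handler] = {}

def register(app_id: str, custom_id: str, handler: Handler) -> None:
    HANDLERS[(app_id, custom_id)] = handler

def dispatch(interaction: dict) -> None:
    key = (interaction["application_id"], interaction["data"]["custom_id"])
    handler = HANDLERS.get(key)
    if handler is None:
        return  # unknown or cross-tenant custom_id: dropped, never shared
    handler(interaction)
```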
That would strike a much healthier balance between security and innovation — and give BotGhost a chance to evolve rather than collapse.
Trust Is a Two-Way Contract
Ultimately, this situation reflects a deeper issue than just one platform being shut down — it’s about how trust is built, maintained, and in some cases, lost within the developer ecosystem.
Discord has every right to enforce high standards for security, but that responsibility goes both ways. Developers need to take their role seriously, especially when building tools for non-technical users. But platforms like Discord must also provide clear expectations, fair timelines, and transparency in enforcement.
We can’t foster a healthy developer ecosystem if trust is only enforced after a breach. It has to be designed into the systems we build, the policies we write, and the way we communicate — before things go wrong.