Building an AI Chatbot for Discord Using GPT API on a VPS in 2026
AI Discord bots — ones that answer questions, moderate conversationally, and engage with server members using language models — have become standard features in large community servers. Building one is more accessible than most expect. Here's the implementation.
The Architecture
Discord User → Discord Bot (discord.js on VPS)
             → OpenAI API (GPT-4o / GPT-4o-mini)
             ← AI response
Discord User ← bot relays the response
Your VPS bot acts as middleware: it receives Discord messages, sends them to OpenAI, receives the AI response, and sends it back to Discord.
Basic Implementation (Node.js)
```javascript
const { Client, GatewayIntentBits } = require('discord.js');
const OpenAI = require('openai');

const client = new Client({
  intents: [
    GatewayIntentBits.Guilds,
    GatewayIntentBits.GuildMessages,
    GatewayIntentBits.MessageContent // Privileged intent — enable it in the Developer Portal
  ]
});

const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY
});

const SYSTEM_PROMPT = `You are a helpful assistant for the "Your Server Name" Discord community.
You are friendly, concise, and helpful. You have knowledge of our server's rules and FAQ.
Rules: 1. Be respectful. 2. No spam. 3. Stay on topic.
FAQ: Our VIP role costs €5/month, purchased at yoursite.com`;

client.on('messageCreate', async (message) => {
  if (message.author.bot) return;
  if (!message.mentions.has(client.user)) return; // Only respond when mentioned

  // Strip the bot mention from the message text
  const userMessage = message.content.replace(/<@!?\d+>/g, '').trim();
  if (!userMessage) return; // Nothing left after removing the mention

  try {
    await message.channel.sendTyping(); // Show a typing indicator while we wait

    const response = await openai.chat.completions.create({
      model: 'gpt-4o-mini', // Cost-effective for Discord bots
      messages: [
        { role: 'system', content: SYSTEM_PROMPT },
        { role: 'user', content: userMessage }
      ],
      max_tokens: 500 // Keep responses Discord-length
    });

    const reply = response.choices[0].message.content;
    await message.reply(reply);
  } catch (error) {
    console.error(error);
    await message.reply('Sorry, I encountered an error. Please try again.');
  }
});

client.login(process.env.DISCORD_TOKEN);
```
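One caveat: Discord caps a single message at 2,000 characters, and even with max_tokens: 500 a reply can occasionally exceed that and be rejected. A small chunking helper handles this — `splitMessage` is an illustrative name, not a discord.js function; it's a sketch that splits at newline boundaries where possible:

```javascript
// Split a long reply into chunks that fit Discord's 2000-character
// message limit, preferring to break at newline boundaries.
function splitMessage(text, limit = 2000) {
  const chunks = [];
  let current = '';
  for (const line of text.split('\n')) {
    if (line.length > limit) {
      // A single oversized line gets hard-split at the limit
      if (current) { chunks.push(current); current = ''; }
      for (let i = 0; i < line.length; i += limit) {
        chunks.push(line.slice(i, i + limit));
      }
    } else if (current && current.length + 1 + line.length > limit) {
      // Adding this line would overflow — start a new chunk
      chunks.push(current);
      current = line;
    } else {
      current = current ? current + '\n' + line : line;
    }
  }
  if (current) chunks.push(current);
  return chunks;
}
```

In the handler, replace the single `message.reply(reply)` with a loop — `for (const chunk of splitMessage(reply)) await message.channel.send(chunk);` — replying to the first chunk and sending the rest plainly also works.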
Cost Management
OpenAI API charges per token. For a busy server:
- gpt-4o-mini: about $0.15 per million input tokens and $0.60 per million output tokens (pricing changes — check OpenAI's current rates)
- A typical exchange (~500 tokens each way) costs a fraction of a cent
- 1,000 conversations/month: usually well under €1
Set max_tokens: 500 and implement per-user rate limiting to prevent runaway costs. Optionally, log token usage per user — the API response's `usage` field reports prompt and completion token counts — so you can see who drives spend.
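The per-user rate limiting mentioned above can be sketched as a simple in-memory sliding window. `checkRateLimit` and its defaults (5 requests per minute) are illustrative choices, not part of discord.js or the OpenAI SDK:

```javascript
// In-memory sliding-window rate limiter: at most maxRequests calls
// per user within the last windowMs milliseconds.
const userRequests = new Map(); // userId -> array of request timestamps

function checkRateLimit(userId, maxRequests = 5, windowMs = 60000, now = Date.now()) {
  // Keep only timestamps still inside the window
  const timestamps = (userRequests.get(userId) || [])
    .filter((t) => now - t < windowMs);
  if (timestamps.length >= maxRequests) {
    userRequests.set(userId, timestamps);
    return false; // Over the limit — skip the API call
  }
  timestamps.push(now);
  userRequests.set(userId, timestamps);
  return true;
}
```

In the messageCreate handler, call it before hitting the API: `if (!checkRateLimit(message.author.id)) return message.reply('Slow down a little!');`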
Conversation Memory (Context Window)
Store recent messages for context:
```javascript
const conversationHistory = new Map(); // userId -> array of {role, content}

// Inside the messageCreate handler, before the API call:
const history = conversationHistory.get(message.author.id) || [];
history.push({ role: 'user', content: userMessage });
if (history.length > 10) history.shift(); // Keep only the last 10 turns
conversationHistory.set(message.author.id, history);

// After a successful reply, remember the assistant's answer too:
history.push({ role: 'assistant', content: reply });
```
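To actually give the model this context, splice the stored history between the system prompt and the API call. A minimal sketch — `buildMessages` and its character budget are illustrative, and character count is only a crude proxy (~4 characters per token):

```javascript
// Assemble the messages array for the Chat Completions API from the
// system prompt and stored per-user history (newest user message last).
function buildMessages(systemPrompt, history, maxChars = 8000) {
  const trimmed = [...history];
  // Drop the oldest turns while the rough character budget is exceeded
  while (trimmed.length > 1 &&
         trimmed.reduce((sum, m) => sum + m.content.length, 0) > maxChars) {
    trimmed.shift();
  }
  return [{ role: 'system', content: systemPrompt }, ...trimmed];
}
```

Then pass `messages: buildMessages(SYSTEM_PROMPT, history)` to `openai.chat.completions.create` instead of the two-message array from the basic implementation.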