# DOKU MCP Server

{% hint style="info" %}
Learn more about DOKU MCP Server use cases [here](https://docs.doku.com/accept-payments/integration-tools/doku-mcp-server?utm_source=developers)
{% endhint %}

This guide provides end-to-end instructions for integrating with the **DOKU MCP Server**, covering system requirements, a step-by-step integration process, and detailed documentation of all available tools. Each tool is described by its purpose, recommended use cases, and relevance to common payment flows.

## Requirements

Before you start, make sure you fulfill the following requirements:

1. Become a DOKU Merchant (follow the guide [here](https://docs.doku.com/get-started/create-account?utm_source=developers))
2. Prepare an AI assistant (e.g. Visual Studio Code, Claude, or any AI assistant of your choice)

***

## Integration Guide

The steps below explain how to connect your AI assistant or agent environment to the DOKU MCP Server.

### Step 1: Generate API Keys

To retrieve your credentials, contact our team by filling out the form [here](https://www.doku.com/en-us/contact-sales).

Be sure to select "**Payment Integration via MCP Server (AI Agents)**" in the list of services you are interested in using.

### Step 2: Encode API Keys

After receiving your API key, append a colon (`:`) and Base64-encode the result; this value serves as the HTTP Basic credential in the MCP `Authorization` header. Run the following command:

```bash
printf '%s' 'api_key_xxxxxxxxxx:' | base64
```

{% hint style="info" %}
**Important Note:**

1. Do not remove the `:` separator.
2. Save the encoded string; you’ll use it in the next step.
{% endhint %}
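If you prefer to script this step end to end, the sketch below encodes the key and prints the resulting header value. The key shown is a hypothetical placeholder:

```shell
# Hypothetical placeholder; substitute your real DOKU API key.
API_KEY="api_key_xxxxxxxxxx"

# Append the colon, then Base64-encode without a trailing newline.
TOKEN=$(printf '%s' "${API_KEY}:" | base64)

# This is the value to place in the MCP Authorization header.
echo "Authorization: Basic ${TOKEN}"
```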

### Step 3: Configure Your AI Assistant

Once you have your Base64-encoded token, configure your AI assistant or agent framework to connect to the DOKU MCP Server.

You can integrate using any of the following supported environments:

1. Visual Studio Code (MCP Extension)
2. Claude Code / Claude Desktop
3. n8n
4. Python Library (MCP Client SDK)
   * LangChain
   * StrandsAgent
5. JavaScript Library (MCP Client SDK)
   * LangChain

Each platform requires you to insert your Base64-encoded credentials and specify the MCP Server endpoint. Refer to the individual setup guides for platform-specific configuration steps.
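Before wiring up a specific platform, you can sanity-check your credentials against the server with a raw JSON-RPC `initialize` request, which is the same handshake the SDK examples in this guide perform. The client ID and token below are placeholders:

```shell
# Placeholder credentials; substitute your own values.
CLIENT_ID="BRN-xxxxx"
TOKEN="your-base64-encoded-api-key"

# JSON-RPC "initialize" request, mirroring the handshake used by the SDK examples.
PAYLOAD='{"jsonrpc": "2.0", "id": 0, "method": "initialize", "params": {}}'

# "|| true" keeps the sketch safe to run even without network access.
curl -s 'https://api-sandbox.doku.com/doku-mcp-server/mcp' \
  -H "Client-Id: ${CLIENT_ID}" \
  -H "Authorization: Basic ${TOKEN}" \
  -H 'Content-Type: application/json' \
  -H 'Accept: application/json, text/event-stream' \
  -d "$PAYLOAD" || true
```

With valid credentials, the response includes the server’s `protocolVersion` and `serverInfo`.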

<details>

<summary><strong>Visual Studio Code</strong></summary>

<figure><img src="https://3092822868-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FqCxtvLoJNNxvp4U7kLHd%2Fuploads%2Fi0DsA3ygblnKdjOetMO2%2Fimage.png?alt=media&#x26;token=98cd6c41-1090-4851-a66b-9aeccc542af7" alt=""><figcaption></figcaption></figure>

1. Open **Visual Studio Code**
2. Open **Command Palette** and type **MCP: Open User Configuration**
3. Paste the following configuration:

```json
{
    "servers": {
        "doku-mcp-server": {
            "type": "http",
            "url": "https://api-sandbox.doku.com/doku-mcp-server/mcp",
            "headers": {
                "Client-Id": "{doku-client-id}",
                "Authorization": "Basic {base64-encoded-token}"
            }
        }
    }
}
```

4. Replace `{doku-client-id}` and `{base64-encoded-token}` with your values.
5. Verify by opening Command Palette → **MCP: Show Installed Servers** → select **doku-mcp-server**. If successful, you’ll see logs confirming the connection and available tools.
6. Restart your AI assistant app. In the chat window, switch to **Agent Mode**, then ask:

> *Show me the available DOKU tools*

If connected, the server will list all MCP tools (e.g., checkout, payment link, QRIS). You can then call a tool such as `create_checkout_link` to confirm everything works.

</details>

<details>

<summary><strong>Claude Code / Claude Desktop</strong></summary>

Claude Code supports two integration methods:

1. Claude CLI
   * Recommended for direct MCP connections
2. Claude Desktop App
   * Requires a proxy because custom headers are not yet supported

***

#### Claude CLI

Claude CLI supports MCP servers over HTTP and allows sending custom headers, making it the simplest way to connect to DOKU MCP Server.

Run the following command in your terminal:

```bash
claude mcp add \
  --transport http \
  doku-mcp-server https://api-sandbox.doku.com/doku-mcp-server/mcp \
  --header "Client-Id: Your Brand ID" \
  --header "Authorization: Your Encoded API Keys"
```

This command does the following actions:

* Registers a new MCP server named **doku-mcp-server**
* Connects via **HTTP transport**
* Sends the required authentication headers:
  * `Client-Id`
  * `Authorization: Basic <base64-encoded-api-key:>`

After running the command, Claude CLI will automatically load and list the available DOKU MCP tools.

***

#### Claude Desktop

Claude Desktop App currently does not support custom HTTP headers for MCP connections. Since the DOKU MCP Server requires headers for authentication, you must use a small Node.js proxy that:

* Connects to DOKU MCP Server over HTTP
* Injects the required headers
* Exposes the MCP server to Claude Desktop using **STDIO protocol**

**Step 1: Clone DOKU MCP Proxy**

Clone the proxy package from the following repository:

> **GitHub**: <https://github.com/PTNUSASATUINTIARTHA-DOKU/doku-mcp-proxy>

If you downloaded it as an archive instead of cloning, extract it to any directory of your choice.

***

**Step 2: Configure the Proxy**

Open the file:\
`doku-mcp-proxy/index.js`

Update the configuration block with your credentials:

```js
const DOKU_CONFIG = {
  url: 'https://api-sandbox.doku.com/doku-mcp-server/mcp',
  headers: {
    'Client-Id': 'Your Brand ID',
    'Authorization': 'Your Encoded API Keys',
    'Content-Type': 'application/json',
    'Accept': 'application/json, text/event-stream'
  }
};
```

Make sure your `Authorization` value is `Basic` followed by the **Base64-encoded `<api-key>:` string**.

***

**Step 3: Install Dependencies**

Go to the proxy folder and run:

```bash
npm install
```

This installs the MCP proxy runtime dependencies.

***

**Step 4: Configure Claude Desktop**

1. Open **Claude Desktop**
2. Go to **Settings → Developer → Edit Config**
3. This opens the file: `claude_desktop_config.json`

Replace or add the following configuration:

```json
{
  "mcpServers": {
    "doku-mcp-local": {
      "command": "/your-node-path/bin/node",
      "args": [
        "/your-path-to/doku-mcp-proxy/index.js"
      ]
    }
  }
}
```

<figure><img src="https://t9018384872.p.clickup-attachments.com/t9018384872/e951fd0f-d42f-424a-b71d-81582298e6e6/Screenshot%202025-12-04%20at%203.13.16%E2%80%AFPM.png" alt=""><figcaption></figcaption></figure>

Notes:

* Replace `"/your-node-path/bin/node"` with your actual Node.js binary path\
  (For macOS with nvm it's something like `/Users/<user>/.nvm/versions/node/v18.x.x/bin/node`)
* Replace the project path with where you extracted the proxy

***

**Step 5: Restart Claude Desktop**

After restarting:

* Go to **Settings → Developer → MCP Servers**
* You should now see **doku-mcp-local** listed
* Claude Desktop will automatically load DOKU MCP tools

If everything is configured properly, you can now call DOKU MCP tools directly inside Claude Desktop.

</details>

<details>

<summary><strong>n8n</strong></summary>

<figure><img src="https://3092822868-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FqCxtvLoJNNxvp4U7kLHd%2Fuploads%2FDE9IC90hN2L58XU4KIWE%2Fimage.png?alt=media&#x26;token=ecc66186-3009-4421-ba00-abf4d06fde59" alt=""><figcaption></figcaption></figure>

With n8n, you can integrate the DOKU MCP Server and expose your AI-powered agent through:

1. Web Browser / API (via n8n workflow trigger)
2. WhatsApp (using WhatsApp Business Cloud and n8n)

Both approaches allow your AI agent to call DOKU MCP tools directly from n8n workflows.

***

#### Web Browser / API

1. Go to **Settings** → **Community Nodes** (requires Admin role)
2. Click **Install**
3. Enter `n8n-nodes-doku-mcp-client` in the **NPM Package Name** field
4. Agree to the terms and conditions, and then click **Install**
5. Check that `n8n-nodes-doku-mcp-client` has been successfully installed

<figure><img src="https://3092822868-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FqCxtvLoJNNxvp4U7kLHd%2Fuploads%2FTvHHxlFUfXeiygrT620a%2Fimage.png?alt=media&#x26;token=e1383f78-8cdb-4a6b-afaf-0f885f53aea6" alt=""><figcaption></figcaption></figure>

6. Create your workflow using a **Chat Trigger**, **AI Agent**, and an **LLM** already integrated with n8n<br>

<figure><img src="https://3092822868-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FqCxtvLoJNNxvp4U7kLHd%2Fuploads%2FPhTJ8S3iNGdXgEVHeJ7u%2Fimage.png?alt=media&#x26;token=4d706b4e-31db-43b0-b1b3-fa971ac9854e" alt=""><figcaption></figcaption></figure>

7. Open the **Tools** panel and select **DOKU MCP Client Tool**
8. Fill in all the required form fields
   1. Endpoint
      * Production: <https://mcp.doku.com/mcp>
      * Sandbox: <https://api-sandbox.doku.com/doku-mcp-server/mcp>
   2. Server Transport: HTTP Streamable
   3. Client ID: Your DOKU Client ID (BRN-xxxxx)
   4. API Key: Your DOKU API Key (api\_key\_xxxx)
   5. Tools to Include: Default is All, or you may specify only the DOKU MCP tools the agent should use

Once saved, your n8n workflow is fully connected to the **DOKU MCP Server**.

***

#### WhatsApp

**Step 1: Configure the WhatsApp Trigger Node**

1. Add a **WhatsApp Business Cloud Trigger** node
   * Search “WhatsApp” in the node list
2. Set the **Trigger Event** to **On messages**
3. Create the required credentials (first-time setup):
   * **Facebook App Client ID**
   * **Facebook App Client Secret**\
     Follow the steps in the official n8n documentation:\
     <https://docs.n8n.io/integrations/builtin/credentials/whatsapp/>

This node will fire whenever a new WhatsApp message is received.

***

**Step 2: Configure the AI Agent Node**

1. Add an **AI Agent** node and connect it after the WhatsApp Trigger
2. Set the prompt source to use the incoming message:
   * Change **Prompt (User Message) → Source** to **Expression**
   * Set the value to:

     ```
     {{ $json.messages[0].text.body }}
     ```

This extracts the user’s message from the WhatsApp webhook payload.

***

**Step 3: Add a Simple Memory Node**

This node allows the AI agent to remember previous messages in the same conversation.

1. Add a **Simple Memory** node
2. Connect it between the WhatsApp Trigger and AI Agent (or wherever memory is needed)
3. Set **Session ID** to uniquely identify each WhatsApp user:

   ```
   {{ $json.contacts[0].wa_id }}
   ```
4. (Optional) Adjust **Context Window Length**
   * Default: 5
   * Controls how many previous turns the AI agent will remember

***

**Step 4: Configure the WhatsApp Send Message Node**

This node sends the AI-generated reply back to the user.

1. Add a **WhatsApp → Send Message** node
2. Create or select the required credentials:
   * **Access Token**
   * **Business Account ID**\
     Documentation:\
     <https://docs.n8n.io/integrations/builtin/credentials/whatsapp/>
3. Configure the message:

| Setting                    | Value                                                                                        |
| -------------------------- | -------------------------------------------------------------------------------------------- |
| **Resource**               | Message                                                                                      |
| **Operation**              | Send                                                                                         |
| **Recipient Phone Number** | <p>Expression:<br><code>{{ $('WhatsApp Trigger').item.json.contacts\[0].wa\_id }}</code></p> |
| **Text Body**              | <p>Expression:<br><code>{{ $json.output }}</code> (content generated by the AI Agent)</p>    |

</details>

<details>

<summary><strong>Python Library</strong></summary>

DOKU MCP Server can be integrated with Python using:

1. LangChain
2. StrandsAgent

Both approaches allow you to load DOKU MCP tools dynamically and let your LLM call them during reasoning.

***

#### LangChain

**Step 1: Create `requirements.txt`**

```txt
# FastAPI and server
fastapi[standard]==0.119.1
uvicorn[standard]==0.38.0

# OpenAI and LangChain
openai>=1.54.0,<2.0.0
langchain==0.3.9
langchain-openai==0.2.8
langchain-community==0.3.9
langchain-core>=0.3.21

# MCP Client SDK
mcp>=1.1.0
httpx>=0.27
httpx-sse>=0.4

# Environment variables
python-dotenv==1.0.1

# Pydantic (FastAPI dependency)
pydantic==2.12.3
```

***

**Step 2: Create `.env`**

```bash
# OpenAI API Key
OPENAI_API_KEY=your-openai-api-key-here

# DOKU MCP Server Configuration
DOKU_MCP_URL=https://api-sandbox.doku.com/doku-mcp-server/mcp
DOKU_CLIENT_ID=your-client-id-here
DOKU_AUTHORIZATION=Basic your-base64-encoded-api-key
```

Note:\
`DOKU_AUTHORIZATION` must be `Basic` followed by the Base64 encoding of your API key with its trailing colon (`:`); append the colon before encoding, not after.
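A quick way to catch a malformed token is to decode it and check for the trailing colon. The token below is a hypothetical example that encodes `your-key:`:

```shell
# Hypothetical example value; replace with the token part of DOKU_AUTHORIZATION.
B64_TOKEN="eW91ci1rZXk6"   # Base64 of "your-key:"

DECODED=$(printf '%s' "$B64_TOKEN" | base64 --decode)

# The decoded key must end with a colon, or authentication will fail.
case "$DECODED" in
  *:) echo "OK: trailing colon present" ;;
  *)  echo "ERROR: missing trailing colon" ;;
esac
```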

***

**Step 3: Create `main.py`**

```python
import os
from typing import Union
from contextlib import asynccontextmanager

from fastapi import FastAPI, HTTPException
from pydantic import BaseModel
from dotenv import load_dotenv
from langchain_openai import ChatOpenAI
from langchain.agents import AgentExecutor, create_tool_calling_agent
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.tools import tool as langchain_tool
from httpx import AsyncClient

load_dotenv()

# Global MCP tools
mcp_tools = []

# DOKU MCP configuration
DOKU_URL = os.getenv("DOKU_MCP_URL")
DOKU_HEADERS = {
    "client-id": os.getenv("DOKU_CLIENT_ID"),
    "authorization": os.getenv("DOKU_AUTHORIZATION"),
    "content-type": "application/json",
    "accept": "application/json, text/event-stream"
}
PROTOCOL_VERSION = None


async def call_mcp_rpc(method: str, params: dict = None, request_id: int = 1) -> dict:
    """Call Doku MCP server via JSON-RPC."""
    headers = DOKU_HEADERS.copy()
    if PROTOCOL_VERSION:
        headers["mcp-protocol-version"] = PROTOCOL_VERSION

    payload = {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": method,
        "params": params or {}
    }

    async with AsyncClient(timeout=30.0) as client:
        response = await client.post(DOKU_URL, headers=headers, json=payload)
        response.raise_for_status()
        return response.json()


@asynccontextmanager
async def lifespan(app: FastAPI):
    """Initialize MCP tools at startup."""
    global mcp_tools, PROTOCOL_VERSION
    import json

    print("=" * 60)
    print("Connecting to DOKU MCP Server…")
    print(f"URL: {DOKU_URL}")
    print("=" * 60)

    try:
        init_result = await call_mcp_rpc("initialize", {}, 0)
        PROTOCOL_VERSION = init_result["result"]["protocolVersion"]
        server_info = init_result["result"]["serverInfo"]

        print(f"✓ Connected: {server_info['name']} v{server_info['version']}")
        print(f"Protocol: {PROTOCOL_VERSION}")

        tools_result = await call_mcp_rpc("tools/list", {}, 1)
        tools_list = tools_result["result"]["tools"]

        print(f"\n✓ Loaded {len(tools_list)} tools:")

        for mcp_tool in tools_list:
            name = mcp_tool["name"]
            desc = mcp_tool.get("description", "")

            print(f"  - {name}")

            def make_tool(name: str, description: str):
                @langchain_tool(name)
                async def call_tool(tool_request: str) -> str:
                    result = await call_mcp_rpc(
                        "tools/call",
                        {"name": name, "arguments": {"toolRequest": tool_request}},
                        2
                    )
                    return json.dumps(result.get("result", {}), indent=2)

                call_tool.description = description
                return call_tool

            mcp_tools.append(make_tool(name, desc))

        print("=" * 60)

    except Exception as e:
        print(f"Error: {e}")

    yield


app = FastAPI(lifespan=lifespan)


def get_llm():
    api_key = os.getenv("OPENAI_API_KEY")
    if not api_key:
        raise ValueError("Missing OPENAI_API_KEY")
    return ChatOpenAI(api_key=api_key, model="gpt-3.5-turbo")


class ChatRequest(BaseModel):
    message: str
    model: str = "gpt-3.5-turbo"
    max_tokens: int = 150


class ChatResponse(BaseModel):
    response: str
    model: str
    usage: dict


@app.post("/chat", response_model=ChatResponse)
async def chat(request: ChatRequest):
    try:
        llm = get_llm()

        if mcp_tools:
            prompt = ChatPromptTemplate.from_messages([
                ("system", "You are a helpful assistant with access to DOKU MCP tools."),
                ("human", "{input}"),
                ("placeholder", "{agent_scratchpad}")
            ])

            agent = create_tool_calling_agent(llm, mcp_tools, prompt)
            executor = AgentExecutor(agent=agent, tools=mcp_tools, verbose=True)

            result = await executor.ainvoke({"input": request.message})
            output = result.get("output", "No response")
        else:
            result = await llm.ainvoke(request.message)
            output = result.content

        return ChatResponse(
            response=output,
            model=request.model,
            usage={"prompt_tokens": 0, "completion_tokens": 0, "total_tokens": 0}
        )

    except Exception as e:
        raise HTTPException(status_code=500, detail=str(e))


if __name__ == "__main__":
    import uvicorn
    uvicorn.run("main:app", host="0.0.0.0", port=8000, reload=True)
```

***

**Step 4: Run Application**

1. Install Python 3.11 (the example below uses Homebrew on macOS)

```bash
brew install python@3.11
```

2. Create Virtual Environment

```bash
python3.11 -m venv .venv
source .venv/bin/activate
```

3. Install Dependencies

```bash
pip install -r requirements.txt
```

4. Run FastAPI App

```bash
python main.py
```

***

**Step 5: Test Using curl**

```bash
curl -L 'localhost:8000/chat' \
  -H 'Content-Type: application/json' \
  -d '{"message": "create checkout with amount 20000"}'
```

***

#### StrandsAgent

**Step 1: Create `requirements.txt`**

```txt
# FastAPI and server
fastapi[standard]==0.119.1
uvicorn[standard]==0.38.0

# OpenAI and Strands
openai>=1.54.0,<2.0.0
strands-agents>=1.1.0

# MCP Client SDK
mcp>=1.1.0
httpx>=0.27

# Environment variables
python-dotenv==1.0.1

# Pydantic
pydantic==2.12.3
```

***

**Step 2: Create `.env`**

```bash
OPENAI_API_KEY=your-openai-api-key-here
DOKU_MCP_URL=https://api-sandbox.doku.com/doku-mcp-server/mcp
DOKU_CLIENT_ID=your-client-id-here
DOKU_AUTHORIZATION=Basic your-base64-encoded-api-key
```

***

**Step 3: Create `main.py`**

```python
import os
import json
from contextlib import asynccontextmanager

from fastapi import FastAPI, HTTPException
from pydantic import BaseModel
from dotenv import load_dotenv

from strands import Agent
from strands.models.openai import OpenAIModel
from strands.tools.mcp import MCPClient
from mcp.client.streamable_http import streamablehttp_client

load_dotenv()

mcp_client = None
agent = None

DOKU_URL = os.getenv("DOKU_MCP_URL")
DOKU_HEADERS = {
    "client-id": os.getenv("DOKU_CLIENT_ID"),
    "authorization": os.getenv("DOKU_AUTHORIZATION"),
    "content-type": "application/json",
    "accept": "application/json, text/event-stream"
}


@asynccontextmanager
async def lifespan(app: FastAPI):
    global mcp_client, agent

    print("=" * 60)
    print("Connecting to DOKU MCP Server (Strands)…")
    print("=" * 60)

    try:
        mcp_client = MCPClient(
            lambda: streamablehttp_client(
                url=DOKU_URL,
                headers=DOKU_HEADERS
            )
        )

        with mcp_client:
            tools = mcp_client.list_tools_sync()

            print(f"✓ Connected. Loaded {len(tools)} tools:")

            for t in tools:
                print(f"  - {t.tool_name}: {t.tool_spec.get('description', '')}")

            model = get_openai_model()
            agent = Agent(
                model=model,
                tools=tools,
                system_prompt="You are a helpful assistant with access to DOKU MCP tools."
            )

    except Exception as e:
        print(f"Error: {e}")

    yield

    mcp_client = None
    agent = None


app = FastAPI(lifespan=lifespan)


def get_openai_model():
    api_key = os.getenv("OPENAI_API_KEY")
    if not api_key:
        raise ValueError("OPENAI_API_KEY is not set")

    return OpenAIModel(
        model_id="gpt-3.5-turbo",
        client_args={"api_key": api_key}
    )


class ChatRequest(BaseModel):
    message: str


class ChatResponse(BaseModel):
    response: str


@app.post("/chat", response_model=ChatResponse)
async def chat(request: ChatRequest):
    global agent, mcp_client

    if not agent or not mcp_client:
        raise HTTPException(500, "Agent not initialized")

    try:
        with mcp_client:
            tools = mcp_client.list_tools_sync()
            session_agent = Agent(
                model=get_openai_model(),
                tools=tools,
                system_prompt="You are a helpful assistant with access to DOKU MCP tools."
            )

            result = await session_agent.invoke_async(request.message)

        return ChatResponse(response=str(result))

    except Exception as e:
        raise HTTPException(500, str(e))


if __name__ == "__main__":
    import uvicorn
    uvicorn.run("main:app", host="0.0.0.0", port=8000, reload=True)
```

***

**Step 4: Run Application**

```bash
brew install python@3.11   # macOS example; use your platform's installer if needed
python3.11 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
python main.py
```

***

**Step 5: Test with curl**

```bash
curl -L 'localhost:8000/chat' \
  -H 'Content-Type: application/json' \
  -d '{"message":"create checkout with amount 20000"}'
```

</details>

<details>

<summary><strong>JavaScript Library</strong></summary>

DOKU MCP Server can be integrated with JavaScript (JS) using LangChain.

***

#### LangChain

**Step 1: Create `package.json`**

```json
{
  "name": "demo-mcp-client-langchain-js",
  "version": "1.0.0",
  "description": "LangChain JS demo with DOKU MCP server integration",
  "main": "server.js",
  "type": "module",
  "scripts": {
    "start": "node server.js",
    "dev": "node --watch server.js"
  },
  "dependencies": {
    "express": "^4.18.2",
    "dotenv": "^16.3.1",
    "axios": "^1.6.0",
    "langchain": "^0.2.0",
    "@langchain/openai": "^0.2.0",
    "@langchain/core": "^0.2.0"
  },
  "engines": {
    "node": ">=18.0.0"
  },
  "license": "MIT"
}

```

This project requires **Node.js 18+** because LangChain uses native `fetch` and modern ESM.

***

**Step 2: Create `.env`**

```
# Environment Variables
OPENAI_API_KEY=your_openai_api_key_here
DOKU_MCP_URL=https://api-sandbox.doku.com/doku-mcp-server/mcp
DOKU_CLIENT_ID=your_doku_client_id
DOKU_AUTHORIZATION=Basic your-base64-encoded-api-key
PORT=3000
```

Notes:

* `DOKU_AUTHORIZATION` **must be** `Basic` followed by the Base64-encoded `<api-key>:` value\
  (append the trailing colon before encoding).
* Never commit `.env` to source control.

***

**Step 3: Create `server.js`**

```js
import express from 'express';
import dotenv from 'dotenv';
import axios from 'axios';
import { ChatOpenAI } from '@langchain/openai';
import { AgentExecutor, createOpenAIFunctionsAgent } from 'langchain/agents';
import { ChatPromptTemplate } from '@langchain/core/prompts';
import { DynamicTool } from '@langchain/core/tools';

dotenv.config();

const app = express();
app.use(express.json());

// Global MCP tools
let mcpTools = [];

// MCP configuration
const DOKU_URL = process.env.DOKU_MCP_URL;
const DOKU_HEADERS = {
  'client-id': process.env.DOKU_CLIENT_ID,
  'authorization': process.env.DOKU_AUTHORIZATION,
  'content-type': 'application/json',
  'accept': 'application/json, text/event-stream'
};

let PROTOCOL_VERSION = null;

/**
 * Send a JSON-RPC request to DOKU MCP Server
 */
async function callMcpRpc(method, params = {}, requestId = 1) {
  const headers = { ...DOKU_HEADERS };
  if (PROTOCOL_VERSION) {
    headers['mcp-protocol-version'] = PROTOCOL_VERSION;
  }

  const payload = {
    jsonrpc: '2.0',
    id: requestId,
    method,
    params
  };

  try {
    const response = await axios.post(DOKU_URL, payload, {
      headers,
      timeout: 30000
    });
    return response.data;
  } catch (error) {
    throw new Error(`MCP RPC call failed: ${error.message}`);
  }
}

/**
 * Initialize MCP connection and load tool definitions
 */
async function initializeMcp() {
  console.log('='.repeat(60));
  console.log('Connecting to DOKU MCP Server...');
  console.log(`URL: ${DOKU_URL}`);
  console.log('='.repeat(60));

  try {
    // Initialize session
    const initResult = await callMcpRpc('initialize', {}, 0);

    PROTOCOL_VERSION = initResult.result.protocolVersion;
    const serverInfo = initResult.result.serverInfo;

    console.log(`✓ Connected: ${serverInfo.name} v${serverInfo.version}`);
    console.log(`  Protocol Version: ${PROTOCOL_VERSION}`);

    // Retrieve available tools
    const toolsResult = await callMcpRpc('tools/list', {}, 1);
    const toolsList = toolsResult.result.tools;

    console.log(`\n✓ Found ${toolsList.length} tools:`);

    // Convert MCP tool definitions into LangChain tools
    mcpTools = toolsList.map((mcpTool) => {
      console.log(`  - ${mcpTool.name}`);

      return new DynamicTool({
        name: mcpTool.name,
        description: mcpTool.description || '',
        func: async (toolRequest) => {
          try {
            const result = await callMcpRpc(
              'tools/call',
              { name: mcpTool.name, arguments: { toolRequest } },
              2
            );
            return JSON.stringify(result.result || {}, null, 2);
          } catch (err) {
            return `Error: ${err.message}`;
          }
        }
      });
    });

    console.log(`\n✓ Loaded ${mcpTools.length} MCP tools successfully`);
    console.log('='.repeat(60));

  } catch (error) {
    console.error(`Error initializing MCP: ${error.message}`);
  }
}

/**
 * Create OpenAI LLM client
 */
function getOpenAiClient(model = 'gpt-3.5-turbo') {
  const apiKey = process.env.OPENAI_API_KEY;
  if (!apiKey) {
    throw new Error('OPENAI_API_KEY environment variable is not set');
  }

  return new ChatOpenAI({
    openAIApiKey: apiKey,
    modelName: model,
    temperature: 0.7
  });
}

/**
 * Chat endpoint
 */
app.post('/chat', async (req, res) => {
  try {
    const { message, model = 'gpt-3.5-turbo' } = req.body;

    if (!message) {
      return res.status(400).json({ error: 'Message is required' });
    }

    const llm = getOpenAiClient(model);
    let responseText;

    // Use agent with MCP tools if available
    if (mcpTools.length > 0) {
      const prompt = ChatPromptTemplate.fromMessages([
        ['system', 'You are a helpful assistant with access to DOKU MCP tools. Use them when appropriate.'],
        ['human', '{input}'],
        ['placeholder', '{agent_scratchpad}']
      ]);

      const agent = await createOpenAIFunctionsAgent({
        llm,
        tools: mcpTools,
        prompt
      });

      const executor = new AgentExecutor({
        agent,
        tools: mcpTools,
        verbose: true
      });

      const result = await executor.invoke({ input: message });
      responseText = result.output ?? 'No response';
    } else {
      // Fallback: LLM only
      const result = await llm.invoke(message);
      responseText = result.content;
    }

    res.json({
      response: responseText,
      model
    });

  } catch (error) {
    res.status(500).json({ error: error.message });
  }
});

/**
 * Start the server
 */
const PORT = process.env.PORT || 3000;
initializeMcp().then(() => {
  app.listen(PORT, () => {
    console.log(`Server running on port ${PORT}`);
  });
});

```

</details>

***

## Tools

The DOKU MCP Server provides 30 tools that cover every stage of the payment process:

### Checkout Payment

<table><thead><tr><th width="100">No.</th><th>Tool Name</th><th>Description</th><th>Use Case</th></tr></thead><tbody><tr><td>1</td><td>create_payment_link</td><td>Generate a payment link that can be used to accept payments without determining customer data</td><td>Customer inputting data (e.g. name, email, etc.) before proceeding to check out and selecting payment methods</td></tr><tr><td>2</td><td>create_checkout_link</td><td>Generate a checkout link that can be used to accept payments with customer data specified</td><td>Customer selecting payment methods and checking out immediately</td></tr></tbody></table>

#### Examples

1. **Payment Link**

<figure><img src="https://3092822868-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FqCxtvLoJNNxvp4U7kLHd%2Fuploads%2FJBHKwjpI4xMPzXc9ReJS%2Fimage.png?alt=media&#x26;token=c0ed1fa0-ea27-416f-83f9-a72e7e23695a" alt=""><figcaption><p>Tool: create_payment_link</p></figcaption></figure>

2. **Checkout**

<figure><img src="https://3092822868-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FqCxtvLoJNNxvp4U7kLHd%2Fuploads%2Fu2locbPToRBGs45b0Mxu%2Fimage.png?alt=media&#x26;token=63ccaf2c-59c1-49d8-aaa0-72bca3c45de5" alt=""><figcaption><p>Tool: create_checkout_link</p></figcaption></figure>

### Direct Payment

<table><thead><tr><th width="100">No.</th><th>Tool Name</th><th>Description</th><th>Use Case</th></tr></thead><tbody><tr><td>3</td><td>get_merchant_payment_methods</td><td>Retrieve a list of all payment methods activated for your merchant account</td><td>Checking which banks, cards, or wallets are active</td></tr><tr><td>4</td><td>generate_payment_virtual_account</td><td>Generate a Virtual Account number for bank transfer</td><td>Completing payments made via VA BCA</td></tr><tr><td>5</td><td>update_payment_virtual_account</td><td>Modify details of an existing Virtual Account</td><td>Extending deadline for an unpaid VA</td></tr><tr><td>6</td><td>delete_payment_virtual_account</td><td>Close or disable payment of an existing VA</td><td>Cancelling an unused VA number</td></tr><tr><td>7</td><td>generate_payment_qris</td><td>Generate a QRIS code for direct payments</td><td>Completing payments made via QRIS</td></tr><tr><td>8</td><td>generate_payment_card_auth</td><td>Perform 3D Secure (3DS) authentication for credit/debit cards requiring additional verification</td><td>When the card issuer mandates 3DS before charging</td></tr><tr><td>9</td><td>generate_payment_card_capture</td><td>Capture a previously authorized card transaction to complete the payment</td><td>For 2-step card flows where authorization and capture are separate</td></tr><tr><td>10</td><td>generate_payment_card_charge</td><td>Charge a card transaction after successful 3DS authentication</td><td>Complete payment after <code>card_auth</code> returns a valid 3DS ID</td></tr><tr><td>11</td><td>generate_payment_ovo_auth</td><td>Authenticate an OVO account before payment</td><td>Required step before processing an OVO transaction</td></tr><tr><td>12</td><td>generate_payment_ovo</td><td>Generate an OVO e-Wallet payment using the authCode from OVO e-Wallet authentication</td><td>Charge customers who choose OVO as a payment method</td></tr><tr><td>13</td><td>generate_payment_doku_ewallet_auth</td><td>Authenticate 
or bind a DOKU e-Wallet account before payment</td><td>Registering a DOKU e-Wallet user for future transactions</td></tr><tr><td>14</td><td>generate_payment_doku_ewallet</td><td>Charge a DOKU e-Wallet account after successful authentication</td><td>Completing payments made via DOKU e-Wallet</td></tr><tr><td>15</td><td>generate_payment_dana</td><td>Generate a DANA e-Wallet payment</td><td>Completing payments made via DANA</td></tr><tr><td>16</td><td>generate_payment_shopeepay</td><td>Generate a ShopeePay e-Wallet payment</td><td>Completing payments made via ShopeePay</td></tr><tr><td>17</td><td>generate_payment_akulaku</td><td>Generate an Akulaku PayLater or installment transaction</td><td>Completing payments made via Akulaku</td></tr><tr><td>18</td><td>generate_payment_kredivo</td><td>Generate a Kredivo PayLater or installment transaction</td><td>Completing payments made via Kredivo</td></tr><tr><td>19</td><td>generate_payment_alfagroup</td><td>Generate a payment code for cash payments at Alfamart/Alfamidi outlets</td><td>Completing payments on the counter at Alfa Group outlets</td></tr><tr><td>20</td><td>generate_payment_indomaret</td><td>Generate a payment code for cash payments at Indomaret outlets</td><td>Completing payments on the counter at Indomaret outlets</td></tr></tbody></table>

#### Examples

1. **Show Payment Methods**

<figure><img src="https://3092822868-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FqCxtvLoJNNxvp4U7kLHd%2Fuploads%2FOMujInRMlKdMDYMwCjYb%2Fimage.png?alt=media&#x26;token=b0a5bddf-d207-4338-aeb0-08da70221de4" alt=""><figcaption><p>Tool: get_merchant_payment_methods</p></figcaption></figure>

2. **Virtual Account Payment**

<figure><img src="https://3092822868-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FqCxtvLoJNNxvp4U7kLHd%2Fuploads%2FXLsJEjCN7udkQccqaZA8%2Fimage.png?alt=media&#x26;token=a765775c-f395-4b48-a71b-fa801a92658f" alt=""><figcaption><p>Tool: generate_payment_virtual_account</p></figcaption></figure>

### Transaction Utility

<table><thead><tr><th width="100">No.</th><th>Tool Name</th><th>Description</th><th>Use Case</th></tr></thead><tbody><tr><td>21</td><td>get_transaction_by_invoice_number</td><td>Retrieve the transaction details such as status, amount, and payment method used based on the invoice number</td><td>Tracking if an order has been paid</td></tr><tr><td>22</td><td>get_transaction_by_customer_name</td><td>Retrieve the transaction details such as status, amount, and payment method used based on the customer name</td><td>Viewing all orders from a particular customer</td></tr><tr><td>23</td><td>get_transaction_by_date_range</td><td>Retrieve the transaction details such as status, amount, and payment method used within a specified start and end date</td><td>Monthly reconciliation for accounting</td></tr></tbody></table>

### Customer Utility

<table><thead><tr><th width="100">No.</th><th>Tool Name</th><th>Description</th><th>Use Case</th></tr></thead><tbody><tr><td>24</td><td>add_customer</td><td>Create a new customer with details like name, email, and phone</td><td>Registering a new buyer before issuing an invoice</td></tr><tr><td>25</td><td>update_customer</td><td>Update existing customer details (e.g., phone, email)</td><td>Correcting customer contact info</td></tr><tr><td>26</td><td>delete_customer</td><td>Remove a customer from your records</td><td>Cleaning inactive or duplicate customer data</td></tr><tr><td>27</td><td>get_customer_by_id</td><td>Retrieve customer details using their unique customer ID</td><td>Checking details of a returning customer</td></tr><tr><td>28</td><td>get_customer_by_name</td><td>Retrieve customer details using the customer’s full or partial name</td><td>Finding a repeat customer without knowing their ID</td></tr><tr><td>29</td><td>get_customer_by_email</td><td>Retrieve customer details using the customer’s registered email address</td><td>Identifying a customer using their email contact</td></tr><tr><td>30</td><td>get_all_customers</td><td>Retrieve all customers linked to your merchant account</td><td>Viewing your full customer base</td></tr></tbody></table>

***

## Use Cases

With the DOKU MCP Server, an AI chatbot can handle a purchase end to end: generate a QRIS payment request when a user wants to buy something, send the QR code, poll for (or receive webhook updates about) the payment status, and notify the user once payment is complete. A Virtual Account flow can be generated, monitored, and confirmed by the agent in the same way. Visit [DOKU Docs](https://docs.doku.com/accept-payments/integration-tools/doku-mcp-server#use-cases) to see how these tools are used in practice.
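As a rough sketch, the QRIS flow above boils down to two tool calls. The tool names come from the tables in this guide, but the argument and response field names (`invoice_number`, `amount`, `qr_content`, `status`) are illustrative assumptions, not the server's actual schema, and `call_tool` stands in for a real MCP tool invocation (e.g., `ClientSession.call_tool` in the MCP Python SDK):

```python
def qris_checkout(call_tool, invoice_number: str, amount: int) -> dict:
    """Generate a QRIS code for an invoice, then check whether it is paid.

    `call_tool(name, args)` is expected to invoke the named DOKU MCP tool
    and return its result as a dict.
    """
    # Step 1: create the QRIS payment request the customer will scan.
    qr = call_tool("generate_payment_qris",
                   {"invoice_number": invoice_number, "amount": amount})

    # Step 2: look up the transaction to see if it has been paid.
    # A real agent would poll this (or rely on webhooks) rather than
    # checking once.
    tx = call_tool("get_transaction_by_invoice_number",
                   {"invoice_number": invoice_number})

    return {"qr_content": qr.get("qr_content"),
            "paid": tx.get("status") == "SUCCESS"}
```

The same shape applies to the Virtual Account flow, substituting `generate_payment_virtual_account` for the QRIS tool.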

<p align="center"><a href="https://docs.doku.com/accept-payments/integration-tools/doku-mcp-server#use-cases" class="button primary">Explore Use Cases</a></p>


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available in this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://developers.doku.com/accept-payments/doku-mcp-server.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present in the current page, you need clarification or additional context, or you want to retrieve related documentation sections.
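The question must be URL-encoded when placed in the query string. A minimal sketch of building the query URL (the question text is just an example; any HTTP client can then fetch the resulting URL):

```python
from urllib.parse import urlencode

PAGE = "https://developers.doku.com/accept-payments/doku-mcp-server.md"

def ask_url(question: str) -> str:
    """Build the documentation-query URL with the question URL-encoded."""
    return f"{PAGE}?{urlencode({'ask': question})}"

print(ask_url("How do I encode my API key?"))
# → https://developers.doku.com/accept-payments/doku-mcp-server.md?ask=How+do+I+encode+my+API+key%3F
```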
