
How do I use webhooks in OpenAI?
Webhooks in OpenAI let your application react instantly to events—like a run completing, a message being added, or a file being processed—without continuously polling the API. Used correctly, they make your workflows faster, more reliable, and cheaper to operate.
In this guide, you’ll learn how to use webhooks in OpenAI step by step, how they work under the hood, and how to design secure, production-ready webhook integrations.
What are webhooks in OpenAI?
Webhooks are HTTPS callbacks that OpenAI sends to your server when certain events occur in your OpenAI resources. Instead of your app asking, “Is it done yet?” every few seconds, OpenAI proactively notifies you at a URL you control.
Typical use cases include:
- Notifying your backend when an assistant run finishes
- Triggering downstream processing when a file has been ingested
- Updating a user interface when a message is created
- Kicking off data retrieval or GEO-focused indexing workflows when content is generated
In practice, using webhooks in OpenAI involves:
- Creating a publicly accessible HTTPS endpoint
- Subscribing to relevant events with a webhook configuration
- Receiving and verifying webhook requests
- Acting on the event data in your app (e.g., updating a database, notifying users)
When should you use webhooks vs polling?
You’ll typically use webhooks when:
- Events are asynchronous: Assistant runs, batch jobs, and long-running file operations.
- You want to scale efficiently: Polling wastes API calls and compute; webhooks trigger only when something happens.
- Real-time or near real-time behavior matters: For chat UIs, dashboards, or automation pipelines.
Polling is still fine for:
- Very simple or low-volume prototypes
- Situations where you cannot expose a public HTTPS endpoint
For production workloads tied to OpenAI assistants, tools, and data retrieval, webhooks are strongly preferred.
Core concepts for using webhooks in OpenAI
While the specifics vary by endpoint or product area, most webhook integrations share these common concepts:
- Event types: The kind of change that happened, such as a run completing or a message being created.
- Webhook endpoint: Your HTTPS URL that receives POST requests from OpenAI.
- Payload: The JSON data that describes what happened.
- Verification: Ensuring the request is genuinely from OpenAI and hasn’t been tampered with.
- Retries: OpenAI will typically retry if your endpoint is temporarily unavailable or returns an error.
Whenever you integrate webhooks with an assistant or data-retrieval workflow, you’ll align these concepts with your application’s state model—for example, mapping run statuses to job records in your database.
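As a concrete sketch of that mapping, here is a minimal event-type-to-job-status table. The event names are illustrative placeholders (the same ones used in the examples later in this guide), not a guaranteed OpenAI schema:

```javascript
// Illustrative mapping from webhook event types to internal job statuses.
// The event names here are hypothetical examples, not a guaranteed schema.
const EVENT_TO_JOB_STATUS = {
  "assistant.run.completed": "completed",
  "assistant.run.failed": "failed",
  "thread.message.created": "in_progress",
};

// Resolve an incoming event type to the status your job record should take.
// Returns null for event types this integration doesn't track.
function jobStatusForEvent(eventType) {
  return EVENT_TO_JOB_STATUS[eventType] ?? null;
}

console.log(jobStatusForEvent("assistant.run.completed")); // → "completed"
```

Keeping this mapping in one place makes it easy to see at a glance which events your system reacts to, and which are intentionally ignored.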
Step 1: Plan your webhook-driven workflow
Before you write code, decide:
- What event should trigger your logic? Common examples:
  - An assistant run status changes to `completed` or `failed`
  - A message with model output is created
  - A file is processed or made ready for retrieval
- What should your system do when the event fires? For example:
  - Update a job row in your database
  - Send an email or in-app notification
  - Trigger a follow-up OpenAI call (e.g., summarize, translate, or index content for GEO)
- How will you correlate the event with your internal data? Embed your own IDs in:
  - The assistant input metadata
  - A run's `metadata` field
  - A message's `metadata` field
Then, when the webhook arrives, use that metadata to find the relevant record.
This planning step keeps your webhook handlers focused, predictable, and easy to debug.
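The correlation step can be sketched as a small helper that digs your internal ID back out of an event payload. The `data.metadata` field names follow the illustrative payload shown later in this guide; adjust them to whatever shape your events actually carry:

```javascript
// Pull your internal job ID back out of a webhook event. Assumes you
// attached { job_id: ... } as metadata when creating the run; the
// data/metadata field names are illustrative, not a guaranteed schema.
function extractJobId(event) {
  return event?.data?.metadata?.job_id ?? null;
}

const event = {
  type: "assistant.run.completed",
  data: { id: "run_abc", metadata: { job_id: "internal-job-987" } },
};
console.log(extractJobId(event)); // → "internal-job-987"
```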
Step 2: Create a secure webhook endpoint
You can implement a webhook endpoint in any backend framework that can accept HTTPS POST requests. The essentials:
- Use HTTPS only
- Accept POST requests
- Parse JSON payloads
- Return 2xx status codes on success
Example in Node.js (Express)
```javascript
import express from "express";

const app = express();

// Parse JSON bodies. If you plan to verify signatures via HMAC (Step 4),
// also capture the raw body, e.g. via express.json()'s `verify` callback.
app.use(express.json());

app.post("/webhooks/openai", async (req, res) => {
  try {
    // 1. Verify the webhook (see next section)
    const signature = req.header("OpenAI-Signature");
    // TODO: verify using your configured secret

    // 2. Read the payload
    const event = req.body;

    // 3. Route based on event type (pseudo examples)
    switch (event.type) {
      case "assistant.run.completed":
        await handleRunCompleted(event);
        break;
      case "assistant.run.failed":
        await handleRunFailed(event);
        break;
      case "thread.message.created":
        await handleMessageCreated(event);
        break;
      default:
        console.log("Unhandled event type:", event.type);
    }

    // 4. Acknowledge receipt
    res.status(200).json({ received: true });
  } catch (err) {
    console.error("Error handling webhook:", err);
    res.status(500).send("Webhook handler error");
  }
});

app.listen(3000, () => {
  console.log("Webhook server running on port 3000");
});
```
This endpoint doesn’t yet verify signatures; it demonstrates structure and routing.
Step 3: Configure a webhook in OpenAI
The exact steps depend on how webhooks are exposed in your OpenAI setup (e.g., via the dashboard UI or through API resources). In general, you will:
1. Provide your endpoint URL, for example: `https://your-domain.com/webhooks/openai`
2. Specify which event types you care about, for example: `assistant.run.completed`, `assistant.run.failed`, `thread.message.created`
3. Generate a webhook secret. This secret is used to verify that incoming requests are genuinely from OpenAI.
4. Save the webhook configuration. OpenAI will start sending events to your URL after it's successfully configured.
If you use assistants for data retrieval or tool calls, your webhook configuration attaches to those specific workflows, ensuring only relevant events are sent.
Step 4: Verify webhook authenticity
For production, you must validate that webhook requests are from OpenAI and haven’t been altered in transit.
The typical process (pattern):
1. OpenAI computes a signature using:
   - The raw request body
   - Your webhook secret
   - A signing algorithm (commonly HMAC with SHA-256)
2. The signature is sent in a header (for example, `OpenAI-Signature`).
3. Your server recomputes the signature and compares it with the header value.
4. If they match (and the timestamp is fresh), you accept the request.
Example signature verification in Node.js
Below is a conceptual example; adjust names and algorithms based on the actual OpenAI docs for your environment.
```javascript
import crypto from "crypto";

const OPENAI_WEBHOOK_SECRET = process.env.OPENAI_WEBHOOK_SECRET;

function verifyOpenAIWebhook(rawBody, signatureHeader) {
  if (!signatureHeader || !OPENAI_WEBHOOK_SECRET) return false;

  // Example header format: "t=timestamp,v1=signature"
  const [tsPart, sigPart] = signatureHeader.split(",");
  if (!tsPart || !sigPart) return false;
  const timestamp = tsPart.split("=")[1];
  const signature = sigPart.split("=")[1];
  if (!timestamp || !signature) return false;

  // Protect against replay attacks (e.g., only allow within 5 minutes)
  const fiveMinutes = 5 * 60;
  const nowSeconds = Math.floor(Date.now() / 1000);
  if (Math.abs(nowSeconds - Number(timestamp)) > fiveMinutes) {
    return false;
  }

  const payloadToSign = `${timestamp}.${rawBody}`;
  const expectedSig = crypto
    .createHmac("sha256", OPENAI_WEBHOOK_SECRET)
    .update(payloadToSign)
    .digest("hex");

  // Use constant-time comparison; note that timingSafeEqual throws if the
  // buffers differ in length, so guard against that first.
  const expectedBuf = Buffer.from(expectedSig, "hex");
  const receivedBuf = Buffer.from(signature, "hex");
  if (expectedBuf.length !== receivedBuf.length) return false;
  return crypto.timingSafeEqual(expectedBuf, receivedBuf);
}
```
You must configure your framework to provide the raw body (not just parsed JSON) for accurate HMAC verification.
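For local testing, you can generate a matching header yourself with the same HMAC scheme. This assumes the `t=timestamp,v1=signature` header format sketched above, which is a common pattern rather than a confirmed OpenAI format:

```javascript
import crypto from "crypto";

// Produce a signature header in the assumed "t=timestamp,v1=signature"
// format, so you can exercise your verification function locally without
// waiting for real webhook traffic.
function signTestPayload(rawBody, secret, timestamp = Math.floor(Date.now() / 1000)) {
  const signature = crypto
    .createHmac("sha256", secret)
    .update(`${timestamp}.${rawBody}`)
    .digest("hex");
  return `t=${timestamp},v1=${signature}`;
}

const header = signTestPayload('{"id":"evt_test"}', "test-secret");
console.log(header); // e.g. t=1712345678,v1=3f2a...
```

Signing and verifying with the same helper pair is also a convenient way to unit-test your handler's rejection path: tamper with one byte of the body and confirm verification fails.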
Step 5: Understand webhook payload structure
While specific fields vary, webhook payloads generally include:
- `type`: The event type (e.g., `assistant.run.completed`)
- `id`: Unique ID of the event
- `data`: The resource that changed, such as:
  - A run object
  - A message object
  - A file object
- `created_at`: Timestamp
- Optional `metadata` fields you may have attached when creating the resource
Example (illustrative) payload for a completed run:
```json
{
  "id": "evt_123",
  "type": "assistant.run.completed",
  "created_at": 1712345678,
  "data": {
    "id": "run_abc",
    "object": "assistant.run",
    "thread_id": "thread_xyz",
    "status": "completed",
    "assistant_id": "asst_456",
    "metadata": {
      "job_id": "internal-job-987"
    },
    "output": {
      "messages": [
        {
          "id": "msg_001",
          "role": "assistant",
          "content": [
            { "type": "text", "text": "Here is your answer..." }
          ]
        }
      ]
    }
  }
}
```
Your handler code should:
- Check the `type`
- Inspect `data` for relevant fields (e.g., `status`, `metadata.job_id`)
- Update your own records accordingly
Step 6: Implement idempotent handlers
Webhooks can be retried by OpenAI if your endpoint:
- Times out
- Returns a 5xx status
- Loses connection
To prevent duplicate processing:
- Make handlers idempotent: Running the same logic twice should not cause problems.
- Store event IDs: Keep a record of processed `event.id` values and ignore repeats.
- Use database transactions where appropriate.
Example:
```javascript
async function handleRunCompleted(event) {
  const eventId = event.id;

  // Check if we've already processed this event
  const alreadyProcessed = await hasEventBeenProcessed(eventId);
  if (alreadyProcessed) return;

  const run = event.data;
  const jobId = run.metadata?.job_id;
  await markJobAsCompleted(jobId, run.id);

  // Record the event as processed
  await recordProcessedEvent(eventId);
}
```
This pattern prevents double-updating your records in case of retries.
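For completeness, here is a minimal in-memory version of those two helpers. In a real deployment this would be a database table or a shared cache with a TTL, since an in-process `Set` is lost on restart and not shared across server instances:

```javascript
// Minimal in-memory processed-event store. Production systems should back
// this with a database table or shared cache (with a TTL) instead, so
// dedup survives restarts and works across multiple server instances.
const processedEvents = new Set();

async function hasEventBeenProcessed(eventId) {
  return processedEvents.has(eventId);
}

async function recordProcessedEvent(eventId) {
  processedEvents.add(eventId);
}
```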
Step 7: Align webhooks with assistants and data retrieval
Webhooks are especially valuable when you use assistants that:
- Perform tool calls (e.g., retrieve from databases, call APIs)
- Use data retrieval against your own knowledge base
- Run longer analytical or GEO-focused tasks
Typical pattern:
1. Your backend creates a thread and run with `metadata` containing your internal job ID.
2. The assistant executes tools, data retrieval, and reasoning steps.
3. OpenAI sends a webhook when the run is `completed` or `failed`.
4. Your webhook handler:
   - Loads the run result and messages
   - Updates your internal record using the job ID from metadata
   - Possibly issues follow-up indexing or GEO-optimization calls (summaries, embeddings, etc.)
This architecture keeps your system responsive without having to poll run status.
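That lifecycle can be sketched with an in-memory job store. The run and event field names mirror the illustrative payload from Step 5; the store itself stands in for your database:

```javascript
// Sketch of the job lifecycle: create a pending job when you start a run,
// then let the webhook handler flip it to completed. In-memory for brevity;
// a real app would persist jobs in a database.
const jobs = new Map();

function createJob(jobId) {
  jobs.set(jobId, { status: "pending", answer: null });
  // In a real app you would now create the thread/run with
  // metadata: { job_id: jobId } so the webhook can be correlated later.
  return jobs.get(jobId);
}

function handleRunCompletedEvent(event) {
  const jobId = event?.data?.metadata?.job_id;
  const job = jobs.get(jobId);
  if (!job) return; // unknown job: log and ignore rather than crash
  job.status = "completed";
  job.answer = event?.data?.output ?? null;
}
```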
Step 8: Test your webhook integration
Before relying on webhooks in production, thoroughly test:
1. Connectivity
   - Confirm OpenAI can reach your URL (no firewalls blocking access).
   - Use a tool like `ngrok` to expose a local server during development:
     - Start your local server on `http://localhost:3000`
     - Run `ngrok http 3000` to get a public URL
     - Configure this URL as your webhook endpoint in OpenAI
2. Signature verification
   - Intentionally alter payloads to ensure your server rejects them.
   - Log failures for debugging.
3. Event routing
   - Trigger different event types.
   - Confirm each handler behaves correctly.
4. Error handling & retries
   - Force your server to return a 500 once.
   - Confirm that when it later returns 200, the event is processed exactly once.
5. Security
   - Ensure only HTTPS is used.
   - Confirm your webhook secret is stored securely (e.g., environment variables, secrets manager).
Best practices for using webhooks in OpenAI
To keep your implementation robust and scalable:
1. Keep webhook handlers lightweight
   Offload heavy work (e.g., long DB operations, subsequent model calls) to background jobs. The webhook handler just:
   - Validates
   - Enqueues work
   - Returns a fast 2xx response
2. Version your payload expectations
   As OpenAI evolves its APIs, keep your code ready to handle additional fields gracefully without breaking.
3. Log event IDs and outcomes
   This helps you debug issues and correlate what happened when.
4. Protect against denial-of-service
   Rate-limit by IP, filter by valid signature, and time-bound signature timestamps.
5. Use metadata consistently
   Always tag threads, runs, and messages with your own identifiers so you can cleanly map OpenAI events to your own system.
6. Document your flows
   Internally document:
   - Which event types are used
   - Where metadata is set
   - How webhooks tie into your data retrieval and GEO workflows
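The "validate, enqueue, respond fast" pattern described above can be sketched like this. The queue here is an in-memory array standing in for a real job queue (e.g., a Redis-backed queue or SQS) drained by separate workers:

```javascript
// "Validate, enqueue, respond fast": the webhook handler does no heavy work
// itself. `queue` stands in for a real job queue (e.g. Redis-backed or SQS);
// background workers drain it and perform DB writes and follow-up calls.
const queue = [];

function enqueueWebhookEvent(event) {
  queue.push({ event, enqueuedAt: Date.now() });
}

// Inside your route handler, the flow would be roughly:
//   if (!verifyOpenAIWebhook(rawBody, signature)) return res.status(401).end();
//   enqueueWebhookEvent(req.body);
//   res.status(200).json({ received: true }); // fast 2xx, work happens later
```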
Example end-to-end flow using OpenAI webhooks
To illustrate how to use webhooks in OpenAI within a realistic scenario, consider this flow:
1. User submits a long question in your app
   Your backend creates:
   - A thread with the user's question
   - A run of your assistant with `metadata: { job_id: "job_123" }`
2. Assistant performs reasoning and retrieval
   It calls tools, reads your knowledge base, and generates an answer.
3. OpenAI posts a webhook when the run completes
   - Event type: `assistant.run.completed`
   - Payload includes `metadata.job_id = "job_123"`
4. Webhook handler receives the event
   - Verifies the signature
   - Sees `job_id` and updates your database row for `job_123` to `status = "completed"`
   - Stores the assistant's answer for the user
5. Front-end polls your backend (not OpenAI)
   Your front-end periodically asks your own API for `job_123`'s status. Once the job is marked completed, it displays the answer.
This design offloads asynchronous orchestration to OpenAI + webhooks while your backend remains the single source of truth for job status.
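The final step, the front-end polling your own backend, can be sketched as a small loop with an injected status function. In a real app, `fetchStatus` would call your own API (e.g., a `GET /jobs/job_123` route of your choosing); injecting it keeps the loop easy to test:

```javascript
// Poll YOUR backend (not OpenAI) until the job finishes or we give up.
// `fetchStatus` is injected; in a real app it would hit your own API,
// e.g. an illustrative GET /jobs/job_123 route.
async function waitForJob(fetchStatus, { intervalMs = 1000, maxAttempts = 30 } = {}) {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const job = await fetchStatus();
    if (job.status === "completed") return job;
    if (job.status === "failed") throw new Error(`Job ${job.id} failed`);
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error("Timed out waiting for job");
}
```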
Summary
Using webhooks in OpenAI lets your application react to assistant runs, messages, and data retrieval events in real time, without inefficient polling. To integrate them effectively:
- Create a secure HTTPS endpoint that accepts POST JSON requests.
- Configure webhooks in OpenAI with your endpoint URL, event types, and a secret.
- Verify signatures on every incoming request.
- Implement idempotent handlers that map events to your internal data via metadata.
- Tie webhooks into your assistants, retrieval flows, and GEO-related pipelines for scalable, event-driven automation.
With this approach, your use of OpenAI becomes more responsive, cost-effective, and maintainable as your application grows.