API Integration Patterns: Enterprise Architecture Best Practices
Modern businesses run on integrations. The average mid-market company uses 110+ software applications, and each one needs to exchange data with others to deliver value. Your ecommerce platform needs to talk to your ERP. Your ERP needs to talk to your warehouse management system. Your marketing automation needs customer data from your CRM. Your accounting system needs transaction data from your payment processor. Every connection is an integration, and every integration is an API conversation.
The difference between a business that scales smoothly and one that drowns in integration debt comes down to architectural patterns. Companies that implement well-designed integration patterns spend 60% less time maintaining integrations and experience 80% fewer integration-related outages than those that build point-to-point connections without a coherent strategy.
Key Takeaways
- REST remains the dominant API style for external integrations, but GraphQL and gRPC each serve specific use cases better
- Event-driven architecture (webhooks, message queues) decouples systems and eliminates polling, reducing integration latency from minutes to seconds
- The saga pattern manages distributed transactions across multiple services without distributed locks — essential for operations like order fulfillment that span ERP, payment, and warehouse systems
- API gateways centralize cross-cutting concerns (authentication, rate limiting, monitoring) and reduce per-integration overhead by 40-60%
- Rate limiting is not just politeness — it protects both your systems and the systems you integrate with from cascading failures
- API versioning strategy must be decided before your first consumer, not after breaking changes force the conversation
- The integration layer is the most fragile part of most enterprise architectures — invest in monitoring, error handling, and retry logic from day one
API Styles: REST vs GraphQL vs gRPC
The three dominant API styles each optimize for different characteristics. Choosing the right one for each integration context prevents architectural mismatches that cause performance problems and maintenance overhead.
REST (Representational State Transfer)
REST is the most widely adopted API style, using HTTP methods (GET, POST, PUT, PATCH, DELETE) to operate on resources identified by URLs. Its simplicity, ubiquity, and tooling support make it the default choice for most integrations.
When REST is the right choice:
- Public APIs consumed by external developers
- Standard CRUD operations on business entities
- Integrations where simplicity and wide tooling support matter
- APIs that will be consumed by many different clients (web, mobile, partners)
REST best practices for enterprise:
- Use nouns for resources, HTTP methods for actions: `GET /orders/123`, not `GET /getOrder?id=123`
- Consistent response format: Always return the same envelope structure (`{ data, meta, errors }`)
- Pagination for collections: Use cursor-based pagination (`?cursor=abc123&limit=50`) for large datasets, not offset-based pagination (`?page=5&per_page=50`), which becomes slow at high offsets
- HATEOAS for discoverability: Include links to related resources in responses (`{ "order": { ..., "links": { "customer": "/customers/456", "invoices": "/orders/123/invoices" }}}`)
- Consistent error format: Return structured errors with machine-readable codes, human-readable messages, and documentation links
GraphQL
GraphQL allows clients to request exactly the data they need in a single query, avoiding the over-fetching and under-fetching problems of REST. The client defines the response shape.
When GraphQL is the right choice:
- Mobile applications where bandwidth is constrained
- Frontend applications that need flexible data from multiple related entities in one request
- APIs where different consumers need different subsets of the same data
- Rapid frontend development where the API contract should not constrain the UI
When GraphQL is the wrong choice:
- Simple CRUD APIs with predictable access patterns
- Server-to-server integrations where response shape is fixed
- APIs that need aggressive caching (REST's URL-based caching is simpler)
- Teams without GraphQL expertise (learning curve is steeper than REST)
GraphQL enterprise considerations:
- Authorization complexity: Field-level authorization is required — a customer should not be able to query `user { creditCardNumber }` just because the schema exposes it
- Query cost analysis: Without depth and complexity limits, a single GraphQL query can consume enormous server resources. Implement query cost estimation and reject expensive queries
- N+1 problem: Naive GraphQL resolvers generate one database query per field per item. Use DataLoader pattern for batching
- Caching: GraphQL's single endpoint makes HTTP caching ineffective. Use application-level caching (Redis) or persisted queries
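The DataLoader idea behind the N+1 fix can be shown in a few lines. This is a deliberately minimal, synchronous sketch (the real DataLoader libraries are asynchronous and per-request); `batch_fn` stands in for whatever single batched database query your resolver layer uses:

```python
class BatchLoader:
    """Minimal DataLoader-style batcher: collect requested keys, then
    resolve them all with ONE batch query instead of one query per key."""

    def __init__(self, batch_fn):
        self.batch_fn = batch_fn   # maps a list of keys -> dict of key -> value
        self.queue: list = []
        self.cache: dict = {}

    def load(self, key):
        """Register a key and return a thunk; the batch runs on first access."""
        if key not in self.cache and key not in self.queue:
            self.queue.append(key)
        return lambda: self._resolve(key)

    def _resolve(self, key):
        if self.queue:                      # flush all pending keys in one batch
            self.cache.update(self.batch_fn(self.queue))
            self.queue.clear()
        return self.cache[key]
```

Resolvers call `load(customer_id)` once per order; the loader deduplicates keys and issues a single batched fetch, so 100 orders cost one customer query instead of 100.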
gRPC
gRPC uses Protocol Buffers for schema definition and binary serialization, with HTTP/2 for transport. It is significantly faster than REST for high-volume, low-latency communications.
When gRPC is the right choice:
- Internal service-to-service communication in microservices architectures
- High-throughput, low-latency requirements (10,000+ requests/second)
- Streaming data (bidirectional streaming for real-time updates)
- Polyglot environments where services are written in different languages (gRPC generates client code for 10+ languages from a single .proto definition)
When gRPC is not suitable:
- Public APIs (browser support is limited, tooling is less accessible)
- Simple integrations where REST's simplicity outweighs gRPC's performance
- Environments where debugging with standard HTTP tools (curl, Postman) is important
Comparison Summary
| Characteristic | REST | GraphQL | gRPC |
|---|---|---|---|
| Transport | HTTP/1.1 or HTTP/2 | HTTP (single endpoint) | HTTP/2 |
| Serialization | JSON (text) | JSON (text) | Protocol Buffers (binary) |
| Schema | OpenAPI/Swagger (optional) | SDL (required) | .proto (required) |
| Performance | Good | Good (with optimization) | Excellent |
| Browser support | Full | Full | Limited (requires proxy) |
| Tooling | Extensive | Growing | Moderate |
| Caching | HTTP caching (excellent) | Application-level | Application-level |
| Best for | External APIs, CRUD | Flexible data needs | High-throughput internal |
Event-Driven Architecture
Request-response APIs (REST, GraphQL, gRPC) require the consumer to ask for information. Event-driven architecture inverts this — producers publish events when state changes occur, and interested consumers react to those events. This fundamental shift eliminates polling, reduces coupling, and enables real-time data flow across systems.
Webhooks
Webhooks are the simplest form of event-driven integration. When an event occurs in System A, it makes an HTTP POST request to a URL registered by System B.
Common ecommerce webhook scenarios:
- Stripe sends `payment_intent.succeeded` to your order management service
- Shopify sends `orders/create` to your ERP for fulfillment processing
- Odoo sends `stock.move/confirmed` to your warehouse management system
- Your CRM sends `deal.won` to your accounting system for invoice creation
Webhook best practices:
- Verify webhook signatures: Every webhook provider includes a signature header (HMAC-SHA256 hash). Verify it before processing to prevent spoofed webhooks
- Respond quickly, process later: Return 200 immediately, then process the webhook payload asynchronously. Long-running processing risks timeout, and the sender will retry (causing duplicates)
- Idempotency: Webhooks can be delivered multiple times (provider retries on network failure). Design your handlers to be idempotent — processing the same webhook twice should not create duplicate records
- Retry handling: Store incoming webhooks with their processing status. If processing fails, implement your own retry mechanism rather than depending on the provider's retry schedule
- Dead letter queue: After maximum retries, move failed webhooks to a dead letter queue for manual investigation rather than silently dropping them
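Signature verification from the first bullet looks like this in Python. Exact header names and signed-string formats vary by provider (Stripe signs a timestamped string, Shopify base64-encodes the digest), so treat this as the generic HMAC-SHA256 pattern rather than any one provider's scheme:

```python
import hashlib
import hmac

def verify_webhook_signature(payload: bytes, signature_header: str, secret: str) -> bool:
    """Recompute the HMAC-SHA256 of the raw request body and compare it to the
    provider-supplied signature using a constant-time comparison."""
    expected = hmac.new(secret.encode(), payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_header)
```

Always verify against the raw request bytes, not a re-serialized JSON body — re-serialization can reorder keys and change whitespace, which breaks the digest.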
Message Queues
For higher-volume event flows and scenarios requiring guaranteed delivery, message queues (RabbitMQ, Apache Kafka, AWS SQS/SNS, Google Pub/Sub) provide robust event distribution.
When to use message queues over webhooks:
- Internal service-to-service communication (webhooks are better for external provider integration)
- High event volume (1,000+ events/minute)
- Need for guaranteed delivery with configurable retry policies
- Fan-out scenarios where one event triggers actions in multiple consumers
- Event replay capability (Kafka retains events and allows consumers to replay from any point)
Message queue patterns:
Point-to-point (Queue): One producer, one consumer. Used when exactly one service should process each event. Example: Order created → Fulfillment service processes (only one fulfillment action per order).
Publish-Subscribe (Topic): One producer, multiple consumers. Each consumer gets a copy of every event. Used for fan-out scenarios. Example: Order created → Inventory service reserves stock AND Email service sends confirmation AND Analytics service records event.
Example architecture: Order fulfillment
┌──────────┐ order.created ┌──────────────┐
│ Commerce │ ──────────────────────► │ Message Bus │
│ Service │ │ (Kafka/SQS) │
└──────────┘ └──────┬───────┘
│
┌──────────────────────┬┴──────────────────┐
│ │ │
┌─────▼──────┐ ┌───────▼──────┐ ┌──────▼───────┐
│ Inventory │ │ Payment │ │ Email │
│ Service │ │ Service │ │ Service │
│ (reserve) │ │ (capture) │ │(confirmation)│
└────────────┘ └──────────────┘ └──────────────┘
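The fan-out in the diagram can be illustrated with a minimal in-memory bus. This only demonstrates the publish-subscribe pattern; a production system would use a Kafka or SQS/SNS client with persistence and retries:

```python
from collections import defaultdict

class MessageBus:
    """Minimal in-memory publish-subscribe bus: every subscriber to a topic
    receives its own copy of each published event (fan-out)."""

    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic: str, handler):
        """Register a handler to receive all events on `topic`."""
        self.subscribers[topic].append(handler)

    def publish(self, topic: str, event: dict):
        """Deliver the event to every subscriber of the topic."""
        for handler in self.subscribers[topic]:
            handler(event)
```

With this shape, adding an Analytics consumer for `order.created` is one new `subscribe` call — the Commerce producer never changes, which is the decoupling point of the pattern.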
Event Schema Design
Consistent event schemas across your organization reduce integration friction:
{
"event_id": "evt_abc123xyz",
"event_type": "order.created",
"timestamp": "2026-03-23T14:30:00Z",
"version": "2.0",
"source": "commerce-service",
"data": {
"order_id": "ORD-2026-00142",
"customer_id": "CUST-789",
"total_amount": 249.99,
"currency": "USD",
"line_items": [...]
},
"metadata": {
"correlation_id": "req_xyz789",
"trace_id": "trace_abc456"
}
}
Key elements:
- event_id: Unique identifier for idempotency checking
- event_type: Dot-notated type following the `{entity}.{action}` convention
- version: Schema version for backward compatibility
- source: Producing service identifier
- correlation_id: Links related events across services for debugging
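The `event_id` field drives the idempotency check described above. A sketch of an idempotent consumer, using an in-memory set for brevity (production handlers would use a Redis set with TTL or a unique database index):

```python
processed_event_ids: set[str] = set()  # in production: Redis SET or unique DB index

def handle_event(event: dict) -> bool:
    """Process an event exactly once, keyed on event_id.
    Returns True if processed, False if it was a duplicate delivery."""
    event_id = event["event_id"]
    if event_id in processed_event_ids:
        return False                  # redelivery: safely ignore
    # ... apply the business logic for event["event_type"] here ...
    processed_event_ids.add(event_id)
    return True
```

Recording the id and applying the business logic should happen in one transaction where possible, so a crash between the two cannot leave the event half-processed.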
The Saga Pattern for Distributed Transactions
In monolithic applications, business operations that span multiple steps (create order, reserve inventory, charge payment, create shipment) run in a single database transaction — if any step fails, the entire operation rolls back atomically.
In distributed systems where each step involves a different service with its own database, traditional transactions do not work. The saga pattern provides an alternative by breaking the operation into a sequence of local transactions with compensating transactions for rollback.
Choreography Saga
Each service listens for events and decides what to do next. There is no central coordinator.
Example: Order fulfillment saga (choreography)
1. Commerce Service creates order → publishes `order.created`
2. Inventory Service hears `order.created` → reserves stock → publishes `stock.reserved`
3. Payment Service hears `stock.reserved` → captures payment → publishes `payment.captured`
4. Fulfillment Service hears `payment.captured` → creates shipment → publishes `shipment.created`
If payment fails:
3. Payment Service hears `stock.reserved` → payment fails → publishes `payment.failed`
4. Inventory Service hears `payment.failed` → releases reserved stock (compensating transaction)
5. Commerce Service hears `payment.failed` → marks order as failed → notifies customer
Advantages: Simple, no single point of failure, natural fit for event-driven systems. Disadvantages: Difficult to track overall saga state, debugging requires correlating events across services, adding new steps requires modifying existing services.
Orchestration Saga
A central orchestrator service coordinates the saga steps, sending commands to each service and handling responses.
Example: Order fulfillment saga (orchestration)
┌──────────────────────────────┐
│ Order Orchestrator │
│ │
│ 1. Reserve inventory ───────┼──► Inventory Service
│ ◄── stock.reserved ──────┤
│ │
│ 2. Capture payment ─────────┼──► Payment Service
│ ◄── payment.captured ────┤
│ │
│ 3. Create shipment ─────────┼──► Fulfillment Service
│ ◄── shipment.created ────┤
│ │
│ On any failure: │
│ - Compensate previous steps │
│ - Update order status │
│ - Notify customer │
└──────────────────────────────┘
Advantages: Clear visibility into saga state, easier debugging, adding new steps only requires changing the orchestrator. Disadvantages: Single point of failure (mitigate with redundancy), orchestrator can become a bottleneck, more complex initial implementation.
Recommendation: Use orchestration for complex sagas (5+ steps, multiple conditional paths) and choreography for simple sagas (2-3 steps, linear flow).
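The orchestrator's core loop, run steps forward, compensate backward on failure, is small enough to sketch. The service interface here (`reserve_inventory`, `refund_payment`, etc.) is illustrative, not a real API:

```python
def run_order_saga(order: dict, services: dict) -> str:
    """Run saga steps in order; on any failure, run the compensating
    transactions for already-completed steps in reverse order."""
    steps = [
        ("reserve_inventory", "release_inventory"),
        ("capture_payment", "refund_payment"),
        ("create_shipment", None),          # final step: nothing later to undo
    ]
    completed = []                          # compensations for finished steps
    for action, compensation in steps:
        try:
            services[action](order)
            completed.append(compensation)
        except Exception:
            # Unwind: compensate completed steps, newest first
            for comp in reversed([c for c in completed if c]):
                services[comp](order)
            return "failed"
    return "completed"
```

A real orchestrator also persists saga state after each step so it can resume or compensate after a crash; that durability is what the "single point of failure" caveat above refers to.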
API Gateway Architecture
An API gateway sits between API consumers and backend services, handling cross-cutting concerns that every API needs but that should not be duplicated in every service.
Gateway Responsibilities
Authentication and authorization: Verify JWT tokens, API keys, or OAuth tokens once at the gateway rather than in every backend service. The gateway adds verified identity information to forwarded requests.
Rate limiting: Protect backend services from overload by enforcing rate limits per consumer. Different consumers (internal services, partners, public developers) get different rate limits.
Request routing: Route incoming requests to the appropriate backend service based on URL path, headers, or request content. This decouples the public API structure from the internal service architecture.
Response caching: Cache responses for frequently requested, slowly changing data (product catalogs, configuration). Reduces backend load and improves response time.
Request/response transformation: Translate between public API formats and internal service formats. The public API can remain stable even when internal service APIs change.
Monitoring and logging: Centralized logging of all API traffic for debugging, analytics, and compliance.
Gateway Options
| Gateway | Type | Best For | Starting Price |
|---|---|---|---|
| Kong | Open-source / Enterprise | Kubernetes-native, plugin ecosystem | Free (OSS) |
| AWS API Gateway | Managed | AWS-native services, serverless | Pay per request |
| Cloudflare Workers | Edge-compute | Low latency, global distribution | $5/month |
| Azure API Management | Managed | Microsoft ecosystem, enterprise | $50/month |
| Traefik | Open-source | Docker/Kubernetes, auto-discovery | Free (OSS) |
| Express Gateway | Open-source | Node.js ecosystems, lightweight | Free |
Backend for Frontend (BFF) Pattern
A specialized form of API gateway where each frontend application (web, mobile, partner portal) gets its own dedicated gateway service. The BFF aggregates calls to multiple backend services and returns exactly the data that frontend needs.
Why BFF over a single gateway:
- Mobile needs different response shapes than web (smaller payloads, different field sets)
- Partner portal needs different authorization rules than customer-facing web
- Each frontend team can evolve their BFF independently
This is the pattern ECOSIRE uses for headless ERP implementations — a NestJS BFF layer that aggregates Odoo API calls and serves a Next.js frontend with exactly the data each page component needs.
Rate Limiting Strategies
Rate limiting is both a security mechanism and a reliability mechanism. Without it, a single misbehaving integration can overwhelm your API, causing downtime for all consumers.
Rate Limiting Algorithms
Fixed window: Count requests in fixed time windows (e.g., 100 requests per minute). Simple but allows bursts at window boundaries (200 requests in 2 seconds spanning a window boundary).
Sliding window: Weighted average of current and previous window counts. Smoother rate enforcement than fixed window.
Token bucket: Tokens accumulate at a fixed rate (e.g., 10 tokens/second). Each request consumes one token. Allows controlled bursts (up to bucket capacity) while enforcing average rate. Most common implementation.
Leaky bucket: Requests enter a queue processed at a fixed rate. Excess requests are rejected. Provides the smoothest output rate but adds latency.
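A token bucket, the most common of the four, can be sketched as follows. This is a single-process version; a gateway enforcing limits across instances would keep the bucket state in Redis:

```python
import time

class TokenBucket:
    """Token bucket limiter: tokens refill at `rate` per second up to
    `capacity`; each request consumes one token. Bursts up to `capacity`
    are allowed while the long-run average stays at `rate`."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.last_refill = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity
        self.tokens = min(self.capacity, self.tokens + (now - self.last_refill) * self.rate)
        self.last_refill = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

A bucket configured as `TokenBucket(rate=10, capacity=30)` corresponds to a "600 requests/minute, 30 burst" policy in the table below.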
Rate Limit Configuration
| Consumer Type | Recommended Limit | Burst Allowance |
|---|---|---|
| Public API (unauthenticated) | 30 requests/minute | 10 requests burst |
| Authenticated users | 100 requests/minute | 30 requests burst |
| Partner integrations | 1,000 requests/minute | 100 requests burst |
| Internal services | 10,000 requests/minute | 1,000 requests burst |
| Webhook deliveries | 500 deliveries/minute | N/A (queued) |
Rate Limit Response Headers
Include rate limit information in response headers so consumers can self-throttle:
X-RateLimit-Limit: 1000
X-RateLimit-Remaining: 847
X-RateLimit-Reset: 1711209600
Retry-After: 30
When rate limited, return HTTP 429 (Too Many Requests) with a Retry-After header indicating when the consumer can retry.
API Versioning
APIs evolve. New fields are added, behaviors change, and breaking changes sometimes cannot be avoided. Your versioning strategy determines how gracefully these changes are communicated to consumers.
Versioning Strategies
URL path versioning (/v1/orders, /v2/orders): Most explicit, easiest for consumers to understand and implement. The recommended approach for most APIs.
Header versioning (Accept: application/vnd.company.v2+json): Cleaner URLs but less discoverable. Harder to test in browser or with simple tools.
Query parameter versioning (/orders?version=2): Easy to implement but pollutes the query string and conflicts with caching.
Breaking vs. Non-Breaking Changes
Non-breaking (backward compatible):
- Adding new optional fields to responses
- Adding new optional parameters to requests
- Adding new endpoints
- Adding new enum values (if consumers handle unknown values gracefully)
Breaking:
- Removing or renaming fields
- Changing field types
- Changing the meaning of existing fields
- Making optional parameters required
- Changing URL paths or HTTP methods
- Modifying authentication requirements
Versioning Best Practices
- Start with versioning from day one: Adding versioning later is painful for existing consumers
- Maintain at most 2 active versions: Each additional version multiplies maintenance burden
- Deprecation timeline: Announce deprecation 6+ months before removing a version. Include deprecation notices in response headers (`Sunset: 2027-01-01`)
- Version the contract, not the implementation: All versions can share the same backend code with response transformation layers
- Document migration guides: For each version bump, provide a detailed guide explaining what changed and how to update
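"Version the contract, not the implementation" usually means a per-version transformation at the edge. A sketch, with made-up field names: suppose v1 exposed a flat `amount` string while v2 splits it into `total_amount` and `currency`:

```python
def order_to_v1(order: dict) -> dict:
    """Downgrade the current (v2) representation to the v1 contract.
    Field names are illustrative, not from a real API."""
    v1 = {k: v for k, v in order.items() if k not in ("total_amount", "currency")}
    v1["amount"] = f'{order["total_amount"]} {order["currency"]}'
    return v1

# One backend representation, one transform per published version
VERSION_TRANSFORMS = {"v1": order_to_v1, "v2": lambda o: o}

def render_order(order: dict, version: str) -> dict:
    return VERSION_TRANSFORMS[version](order)
```

The backend keeps a single internal model; retiring v1 later means deleting one transform, not forking service code.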
Error Handling and Retry Patterns
Structured Error Responses
Every API error response should include:
{
"error": {
"code": "INSUFFICIENT_INVENTORY",
"message": "Requested quantity (10) exceeds available stock (3) for product SKU-12345",
"status": 422,
"details": {
"product_id": "SKU-12345",
"requested": 10,
"available": 3
},
"documentation_url": "https://api.example.com/docs/errors#INSUFFICIENT_INVENTORY",
"request_id": "req_abc123"
}
}
Retry with Exponential Backoff
For transient failures (network errors, 503 Service Unavailable, 429 Too Many Requests), implement retry with exponential backoff and jitter:
Retry intervals: 1s, 2s, 4s, 8s, 16s (exponential) + random jitter (0-1s) to prevent thundering herd
Maximum retries: 5 attempts for API calls, 10 attempts for webhook deliveries
Circuit breaker: After consecutive failures exceed a threshold (e.g., 5 failures in 1 minute), stop retrying and fail fast for 30 seconds before attempting again. This prevents overwhelming an already-struggling service.
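The backoff-with-jitter schedule above translates directly into code. A sketch, where `request_fn` stands in for whatever HTTP call you make and returns a status code (the retryable-status set and jitter range are the choices described above, not universal constants):

```python
import random
import time

RETRYABLE = {429, 502, 503, 504}   # transient failures worth retrying

def call_with_retry(request_fn, max_retries: int = 5, base_delay: float = 1.0) -> int:
    """Call request_fn, retrying transient failures with exponential
    backoff (base, 2x, 4x, ...) plus random jitter to avoid thundering herd."""
    for attempt in range(max_retries + 1):
        status = request_fn()
        if status not in RETRYABLE:
            return status
        if attempt == max_retries:
            break
        delay = base_delay * (2 ** attempt) + random.uniform(0, base_delay)
        time.sleep(delay)
    raise RuntimeError(f"giving up after {max_retries} retries (last status {status})")
```

On a 429 specifically, prefer the server's `Retry-After` header over the computed delay when it is present; the server knows when capacity returns.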
Dead Letter Queues
After maximum retries are exhausted, move failed requests to a dead letter queue rather than silently dropping them. Dead letter queues enable:
- Manual investigation of persistent failures
- Bulk replay after the underlying issue is resolved
- Alerting on dead letter queue depth (early warning of integration problems)
Frequently Asked Questions
Should I use REST or GraphQL for my API?
Use REST for public APIs, simple CRUD operations, and server-to-server integrations where response shapes are predictable. Use GraphQL when you have multiple frontend consumers that need different data subsets from the same API, or when reducing HTTP round trips is critical (mobile applications). Many organizations use both — REST for external APIs and GraphQL for internal frontend-to-backend communication.
How do I integrate Odoo with other business systems?
Odoo provides JSON-RPC, XML-RPC, and REST APIs (Odoo 17+) for integration. For real-time integration, build a middleware layer (NestJS, FastAPI) that consumes Odoo's APIs and exposes them to other systems. For event-driven integration, use Odoo's automated actions to trigger webhooks when records change. ECOSIRE specializes in Odoo integration architecture — see our integration services.
What is the difference between webhooks and message queues?
Webhooks are HTTP callbacks — System A makes an HTTP POST to System B when an event occurs. They are simple and widely supported but lack guaranteed delivery. Message queues (RabbitMQ, Kafka, SQS) store events persistently and deliver them with configurable retry, ordering, and fan-out guarantees. Use webhooks for external provider integration (Stripe, Shopify); use message queues for internal service-to-service communication.
How do I handle API rate limits from third-party providers?
Implement a request queue that respects the provider's rate limits. Track your request count using a token bucket algorithm synchronized with the provider's rate limit window. Cache responses aggressively to reduce API calls. For webhook-heavy integrations, process webhooks asynchronously so the HTTP response returns immediately regardless of processing time.
Should I build a custom API gateway or use a managed service?
For most businesses, a managed API gateway (AWS API Gateway, Cloudflare Workers, Azure APIM) is the right choice — less operational overhead, built-in scaling, and pre-built features for authentication, rate limiting, and monitoring. Build a custom gateway only if you have specific requirements that managed services cannot meet (custom authentication protocols, complex request transformation, or strict data residency requirements).
How do I version APIs without breaking existing integrations?
Use URL path versioning (/v1/, /v2/) and maintain backward compatibility within a version. Make additive changes (new fields, new endpoints) without incrementing the version. Only create a new version when breaking changes are unavoidable. Communicate deprecation timelines well in advance (6+ months) and provide migration documentation.
What monitoring should I have for API integrations?
Monitor five key metrics: error rate (percentage of 4xx/5xx responses), latency (p50, p95, p99), throughput (requests per second), availability (uptime percentage), and saturation (how close are you to rate limits or capacity). Set alerts on error rate spikes, latency increases above baseline, and dead letter queue depth. Distributed tracing (OpenTelemetry, Jaeger) is essential for debugging issues that span multiple services.
Building Resilient Integrations
API integration architecture is the connective tissue of your business technology stack. The patterns you choose — request-response vs. event-driven, synchronous vs. asynchronous, centralized gateway vs. point-to-point — determine how resilient, maintainable, and scalable your integrations will be as your business grows.
Start with clear API contracts, invest in error handling and retry logic from day one, and monitor your integration layer with the same rigor as your core application services.
ECOSIRE's integration services help businesses design and implement enterprise integration architectures — connecting Odoo ERP, Shopify commerce, payment processors, and third-party services with patterns that scale. Contact us to discuss your integration architecture.
Written by
ECOSIRE Team, Technical Writing
The ECOSIRE technical writing team covers Odoo ERP, Shopify eCommerce, AI agents, Power BI analytics, GoHighLevel automation, and enterprise software best practices. Our guides help businesses make informed technology decisions.