Real-Time Dashboards: Streaming Analytics for Operations & Sales
Batch analytics tells you what happened yesterday. Real-time analytics tells you what is happening right now. For operations teams managing warehouses, production floors, and logistics, the difference between 15-minute-old data and yesterday's data is the difference between preventing a problem and reporting on one.
Real-time dashboards are not about vanity --- watching numbers tick up in real time is pointless if no one acts on them. They are about reducing the time between a signal (inventory dropping below threshold, a sales spike, a system anomaly) and the response (reorder, staff up, investigate).
Key Takeaways
- Real-time dashboards are justified when the cost of delayed action exceeds the cost of real-time infrastructure --- operations, fraud, and live sales are the strongest use cases
- Stream processing with Kafka or Redis Streams handles event ingestion, while WebSocket connections push updates to dashboards without polling
- Batch and stream processing are complementary, not competing --- use batch for deep analytics and stream for operational monitoring
- Alert thresholds should be tuned based on business impact, not technical metrics --- a 5 percent drop in conversion rate matters more than a 50ms increase in API latency
When Real-Time Actually Matters
Not every metric needs real-time updates. Building real-time infrastructure is more complex and expensive than batch processing. Reserve it for use cases where delayed information has a measurable cost.
High-Value Real-Time Use Cases
Operations monitoring: Warehouse inventory levels, production line status, order fulfillment pipeline, shipping delays. A stockout costs revenue every minute it persists. A production line failure costs thousands per hour.
Live sales tracking: Flash sales, product launches, promotional events. If a promotion is not converting, you want to know in minutes, not tomorrow. If a payment gateway fails during peak traffic, every second counts.
Fraud and anomaly detection: Unusual transaction patterns, unauthorized access attempts, system health anomalies. The faster you detect fraud, the less damage occurs.
Customer experience: Live chat queue depth, website error rates, checkout abandonment in real time. If the checkout flow breaks during a campaign, you need to know immediately.
When Batch Is Sufficient
Financial reporting: Monthly revenue, quarterly P&L, annual trends. These do not change fast enough to justify real-time.
Strategic analytics: Market share, competitive positioning, cohort analysis. These are analyzed periodically, not continuously.
Historical analysis: RFM segmentation, marketing attribution, demand forecasting model training. Historical data does not change in real time.
Stream Processing Architecture
Batch vs. Stream Processing
| Characteristic | Batch Processing | Stream Processing |
|---------------|-----------------|-------------------|
| Data arrival | Collected over time, processed in bulk | Continuous, event-by-event |
| Latency | Minutes to hours | Milliseconds to seconds |
| Processing | Run on schedule (hourly, daily) | Continuous, always running |
| Complexity | Lower | Higher |
| Cost | Lower infrastructure | Higher infrastructure |
| Use case | Analytics, reporting, ML training | Monitoring, alerting, live dashboards |
| Data completeness | Complete (all data available) | Potentially incomplete (late arrivals) |
| Error handling | Reprocess the batch | Handle in-stream or dead-letter queue |
The optimal architecture uses both: stream processing for operational dashboards and alerting, batch processing for deep analytics and data warehouse loading. This is sometimes called the "Lambda architecture" or "Kappa architecture" depending on whether you maintain separate pipelines or unify them.
Apache Kafka for Event Streaming
Kafka is the industry standard for event streaming. It acts as a durable, distributed message broker that decouples event producers (your applications) from consumers (your dashboards, alerting systems, and analytics pipelines).
Key concepts:
- Topics: Named streams of events (e.g., `orders.created`, `inventory.updated`, `pageviews`).
- Producers: Applications that publish events. Your Odoo ERP publishes order events. Your Shopify store publishes checkout events via webhooks.
- Consumers: Applications that read and process events. Your real-time dashboard consumes order events to update revenue counters.
- Partitions: Topics are split into partitions for parallel processing. Partition by customer ID, product ID, or region depending on your query patterns.
When to use Kafka: High event volumes (thousands of events per second), multi-consumer requirements (same event feeds dashboard, alerting, and data warehouse), durability requirements (events must not be lost).
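To make partitioning concrete, here is a minimal Python sketch of key-based partition assignment. Kafka's default partitioner hashes the message key (murmur2) modulo the partition count so that all events for one key stay on one partition; the CRC32 hash and the six-partition topic below are illustrative stand-ins, not Kafka's actual implementation.

```python
import zlib

NUM_PARTITIONS = 6  # illustrative partition count for an orders topic

def partition_for(key: str, num_partitions: int = NUM_PARTITIONS) -> int:
    """All events sharing a key land on the same partition, preserving
    per-key ordering while still allowing parallel consumption across keys."""
    return zlib.crc32(key.encode("utf-8")) % num_partitions

# Events for the same customer always map to the same partition:
assert partition_for("customer-42") == partition_for("customer-42")
```

Choosing the partition key (customer ID, product ID, region) is therefore a query-pattern decision: whatever entity needs in-order processing should be the key.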
Redis Streams for Lightweight Streaming
For mid-market companies that do not need Kafka's scale, Redis Streams provides a simpler alternative. Redis is likely already in your stack for caching and session storage.
Advantages over Kafka:
- Already deployed in most architectures (lower operational overhead).
- Simpler configuration and management.
- Sub-millisecond latency for small-to-medium event volumes.
- Built-in consumer groups for parallel processing.
When to use Redis Streams: Event volumes under 10,000 per second, fewer than 10 consumers, operational simplicity is a priority, you are already running Redis.
Real-Time KPI Calculation
Real-time KPIs require different calculation approaches than batch KPIs because you cannot re-scan the entire dataset for every update.
Windowed Aggregations
Instead of calculating "total revenue today" by summing all orders, maintain a running total that updates with each new order event. Use time windows to calculate rates and averages:
- Tumbling windows: Fixed, non-overlapping intervals. "Orders per 5-minute window."
- Sliding windows: Overlapping intervals. "Average order value over the last 30 minutes, updated every minute."
- Session windows: Dynamic intervals based on activity gaps. "Revenue per user session."
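The tumbling-window case can be sketched in a few lines of Python. This is a simplified in-memory illustration (a real stream processor also handles late arrivals and window expiry); the event shape `(timestamp, order_id)` is an assumption for the example.

```python
from collections import defaultdict

WINDOW_SECONDS = 300  # 5-minute tumbling windows

def window_start(ts: float, width: int = WINDOW_SECONDS) -> int:
    """Floor a timestamp to the start of its tumbling window."""
    return int(ts // width) * width

def count_orders(events):
    """events: iterable of (timestamp, order_id). Returns orders per window."""
    counts = defaultdict(int)
    for ts, _order_id in events:
        counts[window_start(ts)] += 1
    return dict(counts)

orders = [(0, "a"), (120, "b"), (299, "c"), (300, "d"), (650, "e")]
count_orders(orders)  # → {0: 3, 300: 1, 600: 1}
```

Because each event only increments one counter, the dashboard never has to rescan the day's data to refresh "orders per 5-minute window."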
Common Real-Time KPIs
Sales:
- Orders per minute/hour
- Revenue (running total today)
- Average order value (sliding 1-hour window)
- Conversion rate (sliding 30-minute window)
- Cart abandonment rate (real-time)
Operations:
- Inventory levels (event-driven updates on each transaction)
- Orders in fulfillment pipeline by stage
- Production line output rate per hour
- Shipping delays (orders past SLA threshold)
Technology:
- API response time (p50, p95, p99)
- Error rate per endpoint
- Active users (current sessions)
- Queue depths (background jobs, support tickets)
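A sliding-window KPI such as "average order value over the last hour" can be maintained incrementally rather than recomputed from scratch. A minimal sketch, assuming orders arrive in timestamp order:

```python
from collections import deque

class SlidingAverage:
    """Average order value over a sliding time window.
    Old orders are evicted as new ones arrive, so the average
    never requires rescanning the full day's data."""

    def __init__(self, window_seconds: float):
        self.window = window_seconds
        self.orders = deque()  # (timestamp, amount), oldest first
        self.total = 0.0

    def add(self, ts: float, amount: float) -> None:
        self.orders.append((ts, amount))
        self.total += amount
        # Evict orders that have fallen out of the window.
        while self.orders and self.orders[0][0] <= ts - self.window:
            _old_ts, old_amount = self.orders.popleft()
            self.total -= old_amount

    def average(self) -> float:
        return self.total / len(self.orders) if self.orders else 0.0

aov = SlidingAverage(3600)   # 1-hour window
aov.add(0, 100.0)
aov.add(1800, 50.0)
aov.average()                # → 75.0
aov.add(4000, 80.0)          # the order at t=0 falls out of the window
aov.average()                # → 65.0
```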
Alerting Architecture
Real-time dashboards are enhanced by intelligent alerting. An alert fires when a KPI crosses a threshold, notifying the right person to take action.
Threshold Design
Static thresholds are the simplest approach but generate false positives whenever normal activity varies by hour, day, or season. Dynamic thresholds based on historical patterns reduce the noise.
Static threshold example: Alert when orders per hour drop below 50.
Dynamic threshold example: Alert when orders per hour drop below 2 standard deviations from the same hour's historical average. This accounts for natural patterns --- 3 AM will always have fewer orders than 3 PM.
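A minimal sketch of that dynamic threshold, using Python's standard library. The sample history (orders during the 3 AM hour over the past two weeks) is invented for illustration:

```python
import statistics

def dynamic_lower_bound(history, num_std: float = 2.0) -> float:
    """Lower alert bound: mean of the same hour's historical values
    minus num_std standard deviations."""
    return statistics.mean(history) - num_std * statistics.stdev(history)

def should_alert(current: float, history) -> bool:
    """Fire when the current value drops below the historical band."""
    return current < dynamic_lower_bound(history)

# Hypothetical orders during the 3 AM hour over the past two weeks:
history_3am = [12, 15, 11, 14, 13, 12, 16, 14, 13, 15, 12, 14, 13, 15]
should_alert(4, history_3am)   # far below the band → True
should_alert(13, history_3am)  # within normal range → False
```

The same history evaluated against a static "below 50" rule would page someone every night at 3 AM; the per-hour baseline does not.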
Alert Routing
| Alert Severity | Response Time | Channel | Recipient |
|---------------|-------------|---------|-----------|
| Critical | Immediate | SMS + Phone | On-call engineer + manager |
| High | Within 15 min | Slack + Email | Team channel + owner |
| Medium | Within 1 hour | Slack | Team channel |
| Low | Next business day | Email digest | Team lead |
Alert Fatigue Prevention
Alert fatigue is the number one killer of monitoring systems. When teams receive too many alerts, they start ignoring all of them. Prevent this with:
- De-duplication: Same alert does not fire again until the previous one is resolved.
- Grouping: Related alerts are grouped into a single notification (e.g., "3 services degraded" instead of 3 separate alerts).
- Escalation: If no one acknowledges within the response time, escalate to the next level.
- Regular tuning: Review alert history monthly. Alerts that never lead to action should be removed or downgraded.
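The de-duplication rule above reduces to a small amount of state. A minimal in-memory sketch (a production system would persist this and add grouping and escalation timers):

```python
class AlertManager:
    """De-duplication: an alert key notifies once and stays
    suppressed until it is explicitly resolved."""

    def __init__(self):
        self.active = set()  # alert keys currently firing

    def fire(self, key: str) -> bool:
        """Return True if a notification should actually be sent."""
        if key in self.active:
            return False  # duplicate: suppressed until resolved
        self.active.add(key)
        return True

    def resolve(self, key: str) -> None:
        self.active.discard(key)

am = AlertManager()
am.fire("orders_per_hour_low")   # → True (notify)
am.fire("orders_per_hour_low")   # → False (suppressed)
am.resolve("orders_per_hour_low")
am.fire("orders_per_hour_low")   # → True (fires again after resolution)
```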
Dashboard Refresh Strategies
Polling vs. Push
Polling: The dashboard periodically requests updated data from the server. Simple to implement but creates unnecessary load and introduces latency equal to the polling interval.
Push (WebSocket): The server pushes updates to the dashboard as soon as new data is available. Lower latency, less server load, but more complex to implement.
Server-Sent Events (SSE): A simpler alternative to WebSocket for one-way data flow (server to client). The dashboard opens a long-lived HTTP connection, and the server sends events. Works well when the dashboard only receives data and does not send it.
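The SSE wire format itself is trivially simple, which is much of its appeal: each message is a few `field: value` lines ending in a blank line, and the browser's `EventSource` API dispatches it by event name. A sketch of the server-side formatting (the `revenue` event name and payload are illustrative):

```python
import json

def sse_message(event_type: str, payload: dict) -> str:
    """Format one Server-Sent Events frame. A browser EventSource
    dispatches it to the listener registered for `event_type`."""
    return f"event: {event_type}\ndata: {json.dumps(payload)}\n\n"

sse_message("revenue", {"today": 48210.5})
# → 'event: revenue\ndata: {"today": 48210.5}\n\n'
```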
Recommended Approach
Use WebSocket or SSE for real-time KPIs that update every few seconds. Use polling (every 30 to 60 seconds) for KPIs that do not need sub-minute freshness. Use batch-loaded data from the data warehouse for historical context displayed alongside real-time numbers.
Hybrid dashboard layout:
- Top row: Real-time KPIs via WebSocket (orders/min, active users, live revenue)
- Middle row: Near-real-time charts via polling (hourly trends, pipeline status)
- Bottom row: Batch analytics (MTD comparison, forecast, segment distribution)
Implementation Example: Live Sales Dashboard
A practical real-time sales dashboard for a company running Odoo and Shopify might include the following components.
Data Flow
- Shopify sends order webhooks to your API.
- Odoo generates order events via database triggers or polling.
- Events are published to Redis Streams (or Kafka for high volume).
- A stream consumer calculates windowed aggregations and updates Redis counters.
- A WebSocket server reads Redis counters and pushes updates to connected dashboards.
- The dashboard renders updated numbers, charts, and alerts.
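Step 4 of the flow, the stream consumer, can be sketched as a small handler that folds each order event into running counters. This in-memory version stands in for the real thing, where the counters would live in Redis (via `INCR`/`INCRBYFLOAT`) so the WebSocket server can read them; the event shape is an assumption:

```python
class RevenueCounters:
    """Fold order events into the counters a live dashboard displays:
    a running daily revenue total and a per-minute order count."""

    def __init__(self):
        self.revenue_today = 0.0
        self.orders_per_minute = {}  # minute index -> order count

    def handle_order(self, event: dict) -> None:
        """event: {"ts": unix_timestamp, "total": order_amount}"""
        self.revenue_today += event["total"]
        minute = int(event["ts"] // 60)
        self.orders_per_minute[minute] = self.orders_per_minute.get(minute, 0) + 1

c = RevenueCounters()
for ev in [{"ts": 0, "total": 40.0},
           {"ts": 30, "total": 25.5},
           {"ts": 65, "total": 10.0}]:
    c.handle_order(ev)
# c.revenue_today → 75.5
# c.orders_per_minute → {0: 2, 1: 1}
```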
Dashboard Widgets
- Revenue today: Large number with comparison to same day last week. Updates on every order.
- Orders per hour: Bar chart showing the last 24 hours with a real-time bar for the current hour.
- Top products: Table of top 10 products by revenue in the current day, updating live.
- Geographic heatmap: Map showing order density by region, updating on each order.
- Conversion funnel: Visitors, add-to-cart, checkout initiated, payment completed --- all real-time.
- Alert panel: Active alerts with severity, time opened, and assignment status.
This live dashboard complements the deeper self-service analytics that business teams use for strategic analysis.
Frequently Asked Questions
How much does real-time infrastructure cost compared to batch?
For a mid-market company, a basic real-time stack (Redis Streams, a Node.js WebSocket server, and a Grafana dashboard) adds $100 to $300 per month in infrastructure costs. A full Kafka deployment with Kafka Connect and stream processing adds $500 to $2,000 per month depending on volume and cloud provider. Compare this against the cost of the problems you are detecting faster --- if preventing one stockout per month saves $5,000, the infrastructure pays for itself many times over.
Can we use Grafana for business dashboards or just technical monitoring?
Grafana has evolved beyond its DevOps roots. Grafana 10 supports bar charts, pie charts, tables, and stat panels that work for business KPIs. However, it lacks the no-code query builder and self-service exploration features of Metabase or Superset. Use Grafana for real-time operational dashboards and a separate BI tool for self-service analytics. They complement each other well.
What is the minimum data we need to start with real-time dashboards?
Start with one event stream --- order creation is the most common starting point. You need a way to capture the event (Shopify webhook or Odoo database trigger), a message queue (Redis Streams), a consumer that calculates aggregates, and a frontend that displays them. This minimum viable real-time dashboard can be built in one to two weeks.
What Is Next
Real-time dashboards are one component of a comprehensive BI strategy. They work best alongside batch analytics from your data warehouse, self-service exploration tools, and predictive models that forecast what comes next.
ECOSIRE builds real-time monitoring and alerting systems integrated with Odoo ERP and Shopify. Our OpenClaw AI platform adds anomaly detection to your streams, and our Odoo consultancy team designs the event-driven architectures that power live dashboards.
Contact us to discuss real-time analytics for your operations.
Published by ECOSIRE --- helping businesses scale with AI-powered solutions across Odoo ERP, Shopify eCommerce, and OpenClaw AI.
Written by
ECOSIRE Research and Development Team
Building enterprise-grade digital products at ECOSIRE. Sharing insights on Odoo integrations, e-commerce automation, and AI-powered business solutions.