Drizzle ORM with PostgreSQL: Complete Guide
Drizzle ORM occupies a unique position in the TypeScript database ecosystem: it's the ORM that's actually just a query builder in disguise, with schema-as-code that generates SQL you can read, and a migration system that treats your database like the source of truth. After years of Prisma and TypeORM, Drizzle feels like coming home to raw SQL — but with full TypeScript inference and zero runtime overhead.
This guide covers everything from initial setup to production patterns, drawing from a codebase with 65+ Drizzle schema files, multi-tenant queries, and a lazy proxy connection pattern that works reliably in both Next.js and NestJS contexts.
Key Takeaways
- Use a lazy Proxy for db connection — never create eager connections at module load time
- Local PostgreSQL should run on a non-default port (5433) to avoid conflicts with system installations
- All tables need organizationId for multi-tenancy — enforce at the schema level
- Drizzle enum comparisons need explicit TypeScript casting for type narrowing
- Never use sql.raw() — always use the sql template literal with parameterized values
- Relations are separate from foreign keys — define both for full type safety
- Transactions wrap multi-step operations; pass the tx object down to services
- Schema index (schema/index.ts) must export all tables for migrations to work
Installation and Setup
Drizzle requires two packages: the core ORM and a database driver. With the node-postgres (pg) driver:
pnpm add drizzle-orm pg
pnpm add -D drizzle-kit @types/pg
Alternatively, with the postgres.js driver (the one used throughout this guide):
pnpm add drizzle-orm postgres
pnpm add -D drizzle-kit
The postgres package (postgres.js, a different library from pg) is the recommended driver for Drizzle in modern TypeScript projects: it supports connection pooling and prepared statements, and ships with better TypeScript types. In both cases, drizzle-kit provides migrations and Drizzle Studio.
The Lazy Proxy Pattern
The most critical architectural decision with Drizzle in a Next.js or NestJS monorepo is never creating an eager database connection. Eager connections at module load time cause issues in:
- Next.js: modules are imported during the build process, where no DB is available
- NestJS: package imports happen before environment variables load
- Serverless: cold starts get delayed by connection overhead
The lazy Proxy pattern solves all of these:
// packages/db/src/index.ts
import { drizzle } from 'drizzle-orm/postgres-js';
import postgres from 'postgres';
import * as schema from './schema';
let _db: ReturnType<typeof drizzle> | null = null;
function getDb() {
if (!_db) {
const connectionString = process.env.DATABASE_URL;
if (!connectionString) {
throw new Error('DATABASE_URL environment variable is not set');
}
const client = postgres(connectionString, {
max: 10, // connection pool size
idle_timeout: 20,
connect_timeout: 10,
});
_db = drizzle(client, { schema });
}
return _db;
}
// Export a Proxy that initializes the connection on first use
export const db = new Proxy({} as ReturnType<typeof drizzle>, {
get(_, prop) {
return getDb()[prop as keyof ReturnType<typeof drizzle>];
},
});
export * from './schema';
This pattern means importing @ecosire/db never opens a database connection. The connection is created only when the first query runs.
Schema Design
Drizzle schemas are plain TypeScript files that export table definitions. The discipline here is to keep related tables in the same file and export everything through a central index.
// packages/db/src/schema/contacts.ts
import {
pgTable,
uuid,
text,
varchar,
timestamp,
pgEnum,
boolean,
index,
} from 'drizzle-orm/pg-core';
export const contactTypeEnum = pgEnum('contact_type', ['individual', 'company', 'partner']);
export const contacts = pgTable(
'contacts',
{
id: uuid('id').defaultRandom().primaryKey(),
organizationId: uuid('organization_id').notNull(), // Multi-tenancy
name: varchar('name', { length: 255 }).notNull(),
email: varchar('email', { length: 255 }),
phone: varchar('phone', { length: 50 }),
type: contactTypeEnum('type').default('individual').notNull(),
isActive: boolean('is_active').default(true).notNull(),
notes: text('notes'),
createdAt: timestamp('created_at').defaultNow().notNull(),
updatedAt: timestamp('updated_at').defaultNow().notNull(),
},
(table) => ({
organizationIdx: index('contacts_organization_idx').on(table.organizationId),
emailIdx: index('contacts_email_idx').on(table.email),
})
);
export type Contact = typeof contacts.$inferSelect;
export type NewContact = typeof contacts.$inferInsert;
The $inferSelect and $inferInsert types give you full TypeScript inference — Drizzle knows which fields are required vs. optional at insert time based on your schema constraints.
// packages/db/src/schema/index.ts
export * from './contacts';
export * from './orders';
export * from './licenses';
export * from './products';
// ... all tables
The index file must export every table for drizzle-kit to pick them up during migration generation.
Relations
Drizzle separates foreign key constraints (enforced by PostgreSQL) from relation definitions (used by Drizzle's relational query API). You need both:
// packages/db/src/schema/relations.ts
import { relations } from 'drizzle-orm';
import { contacts } from './contacts';
import { orders } from './orders';
import { orderItems } from './order-items';
import { products } from './products';
export const contactsRelations = relations(contacts, ({ many }) => ({
orders: many(orders),
}));
export const ordersRelations = relations(orders, ({ one, many }) => ({
contact: one(contacts, {
fields: [orders.contactId],
references: [contacts.id],
}),
items: many(orderItems),
}));
export const orderItemsRelations = relations(orderItems, ({ one }) => ({
order: one(orders, {
fields: [orderItems.orderId],
references: [orders.id],
}),
product: one(products, {
fields: [orderItems.productId],
references: [products.id],
}),
}));
With relations defined, you can use Drizzle's relational query API:
// Fetch order with contact and items in one query
const order = await db.query.orders.findFirst({
where: eq(orders.id, orderId),
with: {
contact: true,
items: {
with: {
product: true,
},
},
},
});
Drizzle compiles this into a single SQL statement (joins plus JSON aggregation under the hood) — no N+1 problem.
Type-Safe Queries
Drizzle's query API is fully type-safe. The select type narrows based on which columns you include:
import { db } from '@ecosire/db';
import { contacts } from '@ecosire/db/schema';
import { eq, and, like, ilike, sql, count } from 'drizzle-orm';
// Full select — returns Contact[]
const allContacts = await db
.select()
.from(contacts)
.where(eq(contacts.organizationId, orgId))
.limit(50);
// Partial select — returns only specified columns
const contactNames = await db
.select({
id: contacts.id,
name: contacts.name,
email: contacts.email,
})
.from(contacts)
.where(
and(
eq(contacts.organizationId, orgId),
eq(contacts.isActive, true)
)
);
// Count query
const [{ total }] = await db
.select({ total: count() })
.from(contacts)
.where(eq(contacts.organizationId, orgId));
// Search with ILIKE (case-insensitive) — note ilike, not like
const searchResults = await db
  .select()
  .from(contacts)
  .where(
    and(
      eq(contacts.organizationId, orgId),
      ilike(contacts.name, `%${searchTerm}%`)
    )
  )
  .limit(20);
For dynamic where clauses (filters from UI), build conditions conditionally:
import { SQL } from 'drizzle-orm';
async function searchContacts(orgId: string, filters: {
search?: string;
type?: 'individual' | 'company';
isActive?: boolean;
}) {
const conditions: SQL[] = [eq(contacts.organizationId, orgId)];
if (filters.search) {
conditions.push(like(contacts.name, `%${filters.search}%`));
}
if (filters.type) {
// Enum comparisons need explicit casting
conditions.push(eq(contacts.type, filters.type as 'individual' | 'company' | 'partner'));
}
if (filters.isActive !== undefined) {
conditions.push(eq(contacts.isActive, filters.isActive));
}
return db
.select()
.from(contacts)
.where(and(...conditions))
.limit(100);
}
Note the explicit cast on enum values: filters.type as 'individual' | 'company' | 'partner'. Drizzle's enum type doesn't automatically accept narrower string subtypes without the cast.
Insert, Update, and Delete
import { db } from '@ecosire/db';
import { contacts, NewContact } from '@ecosire/db/schema';
import { eq, and } from 'drizzle-orm';
// Insert with RETURNING
async function createContact(data: NewContact) {
const [created] = await db
.insert(contacts)
.values(data)
.returning();
return created; // Fully typed as Contact
}
// Upsert (insert or update on conflict)
async function upsertContact(data: NewContact) {
const [result] = await db
.insert(contacts)
.values(data)
.onConflictDoUpdate({
target: [contacts.organizationId, contacts.email],
set: {
name: data.name,
updatedAt: new Date(),
},
})
.returning();
return result;
}
// Update with RETURNING
async function updateContact(
id: string,
orgId: string,
updates: Partial<NewContact>
) {
const [updated] = await db
.update(contacts)
.set({ ...updates, updatedAt: new Date() })
.where(
and(
eq(contacts.id, id),
eq(contacts.organizationId, orgId) // Always filter by org
)
)
.returning();
return updated;
}
// Soft delete (recommended over hard delete)
async function deleteContact(id: string, orgId: string) {
return db
.update(contacts)
.set({ isActive: false, updatedAt: new Date() })
.where(
and(
eq(contacts.id, id),
eq(contacts.organizationId, orgId)
)
);
}
Always use .returning() on INSERT and UPDATE statements — it avoids a second SELECT round-trip to get the created/updated record.
Transactions
Multi-step operations that must succeed or fail together need transactions:
async function createOrderWithItems(
orderData: NewOrder,
items: NewOrderItem[]
) {
return db.transaction(async (tx) => {
// Create the order
const [order] = await tx
.insert(orders)
.values(orderData)
.returning();
// Create order items
const createdItems = await tx
.insert(orderItems)
.values(items.map((item) => ({ ...item, orderId: order.id })))
.returning();
// Update inventory (must succeed or entire transaction rolls back)
for (const item of items) {
await tx
.update(products)
.set({
stock: sql`${products.stock} - ${item.quantity}`,
})
.where(eq(products.id, item.productId));
}
return { order, items: createdItems };
});
}
When passing transaction context to service methods:
// Service accepts optional transaction for composability
type DbOrTx = typeof db | Parameters<Parameters<typeof db.transaction>[0]>[0];

async function activateLicense(licenseId: string, tx?: DbOrTx) {
  const queryRunner = tx ?? db;
  return queryRunner
    .update(licenses)
    .set({ status: 'active', activatedAt: new Date() })
    .where(eq(licenses.id, licenseId))
    .returning();
}
Migrations with drizzle-kit
The drizzle-kit CLI generates migrations from schema changes:
// drizzle.config.ts
import { defineConfig } from 'drizzle-kit';
export default defineConfig({
schema: './src/schema/index.ts',
out: './src/migrations',
dialect: 'postgresql',
dbCredentials: {
url: process.env.DATABASE_URL!,
},
verbose: true,
strict: true,
});
Workflow:
# Generate migration SQL from schema changes
npx drizzle-kit generate
# Apply migrations to database
npx drizzle-kit migrate
# Push schema directly (dev only — skips migration files)
npx drizzle-kit push
# Open Drizzle Studio (visual database browser)
npx drizzle-kit studio
For production, always use migrate (not push). The push command is for local development iteration — it doesn't create migration files, so changes aren't tracked.
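As a sketch of that deployment rule, a CI/CD step might run migrations before booting the app (the script and entry-point names here are assumptions, not from the source):

```shell
#!/usr/bin/env sh
set -eu

# Apply pending migrations; fail the deploy if the schema can't be brought in sync
npx drizzle-kit migrate

# Only start the application once the schema is up to date
node dist/main.js
```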
Pagination Pattern
Efficient pagination requires both a data query and a count query:
interface PaginationParams {
page: number;
pageSize: number;
}
async function getContactsPaginated(orgId: string, { page, pageSize }: PaginationParams) {
const offset = (page - 1) * pageSize;
const [data, [{ total }]] = await Promise.all([
db
.select()
.from(contacts)
.where(eq(contacts.organizationId, orgId))
.limit(pageSize)
.offset(offset)
.orderBy(contacts.createdAt),
db
.select({ total: count() })
.from(contacts)
.where(eq(contacts.organizationId, orgId)),
]);
return {
data,
pagination: {
page,
pageSize,
total,
totalPages: Math.ceil(total / pageSize),
},
};
}
Running both queries in parallel with Promise.all means the response waits only for the slower of the two queries, rather than for both in sequence.
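The offset/limit arithmetic can also be factored into a small pure helper (a sketch; the pageWindow name is not from the source), which makes the math easy to unit-test without a database:

```typescript
// Pure pagination math: no database required.
function pageWindow(page: number, pageSize: number, total: number) {
  const offset = (page - 1) * pageSize; // rows to skip
  const totalPages = Math.max(1, Math.ceil(total / pageSize)); // at least one page
  return { offset, limit: pageSize, totalPages };
}
```

For example, pageWindow(3, 20, 101) skips 40 rows and reports 6 total pages, matching the query above.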
Common Pitfalls and Solutions
Pitfall 1: Using sql.raw() for dynamic queries
// Dangerous — SQL injection vulnerability
const results = await db.execute(
sql.raw(`SELECT * FROM contacts WHERE name = '${userInput}'`)
);
// Safe — parameterized
const results = await db.execute(
sql`SELECT * FROM contacts WHERE name = ${userInput}`
);
Pitfall 2: Missing indexes on frequently queried columns
Always add indexes to columns you filter by. The most common missed indexes:
(table) => ({
organizationIdx: index('contacts_org_idx').on(table.organizationId),
emailIdx: index('contacts_email_idx').on(table.email),
compositeIdx: index('contacts_org_active_idx').on(
table.organizationId,
table.isActive
),
})
Pitfall 3: N+1 queries from manual relation loading
// N+1 problem — one query per order
const orders = await db.select().from(ordersTable);
for (const order of orders) {
order.contact = await db
.select()
.from(contacts)
.where(eq(contacts.id, order.contactId));
}
// Solution — use Drizzle's relational query API
const orders = await db.query.orders.findMany({
with: { contact: true },
});
Pitfall 4: Forgetting to export new tables from schema/index.ts
If you create a new schema file but forget to re-export from schema/index.ts, drizzle-kit won't see the new table and migrations won't be generated.
Frequently Asked Questions
How does Drizzle compare to Prisma for TypeScript projects?
Drizzle is closer to raw SQL with TypeScript types: you write queries that map nearly 1:1 to SQL. Prisma abstracts more, with its own schema language and a generated client. Drizzle has better TypeScript inference in complex queries, smaller runtime overhead, and no code-generation step. Prisma has a larger ecosystem and more built-in tooling, and patterns like soft deletes or audit logging can be layered on via client extensions. For teams comfortable with SQL, Drizzle is generally the better fit.
Should I use drizzle-kit push or migrate in CI/CD?
Always use drizzle-kit migrate in CI/CD. The push command is for rapid local development — it applies schema changes directly without creating migration files, so there's no audit trail and rollback is difficult. In your deployment pipeline, run drizzle-kit migrate before starting the application to ensure the schema is in sync.
How do I handle database seeding with Drizzle?
Create a separate seed file and import your schema directly. Use onConflictDoNothing() for idempotent seeds that won't fail on re-run. Store seed data in constants for easy updates, and run seeds after migrations in your dev setup script.
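The onConflictDoNothing() idea can be sketched in memory (hypothetical types; a Map keyed by email stands in for the unique constraint): re-running the seed changes nothing.

```typescript
// In-memory stand-in for a table with a unique email constraint.
const contactsTable = new Map<string, { email: string; name: string }>();

// Idempotent seed: a conflicting row is skipped instead of failing the run.
function seedContacts(rows: { email: string; name: string }[]): void {
  for (const row of rows) {
    if (!contactsTable.has(row.email)) {
      contactsTable.set(row.email, row); // insert
    }
    // else: conflict -> do nothing, mirroring onConflictDoNothing()
  }
}

// Seed data kept in a constant so it is easy to update.
const SEED_CONTACTS = [
  { email: 'ops@example.com', name: 'Ops' },
  { email: 'sales@example.com', name: 'Sales' },
];
```

Running seedContacts(SEED_CONTACTS) twice leaves exactly two rows, which is the property you want from a seed script that runs after every migration.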
How do I perform full-text search with Drizzle and PostgreSQL?
Use the sql template literal with PostgreSQL's to_tsvector and to_tsquery functions. Drizzle doesn't abstract full-text search, but you can write it directly. Add a GIN index on the tsvector column for performance, and use websearch_to_tsquery for user-provided search terms to safely parse natural language queries.
How do I handle schema changes without downtime?
Use additive migrations — add new columns as nullable first, backfill data, then add the NOT NULL constraint. When removing a column, never drop it in the same release that stops using it in code. The sequence is: 1) deploy code that no longer reads or writes the column, 2) deploy the migration that drops it. This keeps rollback possible at every step without data loss.
Next Steps
A well-designed database layer is the foundation every production application is built on. ECOSIRE's engineering team works with Drizzle ORM and PostgreSQL daily, managing 65+ schema files across a complex multi-tenant system.
Whether you need database architecture consulting, Odoo ERP integration, or a complete backend system built with modern TypeScript tooling, explore our services to learn how we can help.
Written by
ECOSIRE Team, Technical Writing
The ECOSIRE technical writing team covers Odoo ERP, Shopify eCommerce, AI agents, Power BI analytics, GoHighLevel automation, and enterprise software best practices. Our guides help businesses make informed technology decisions.
Related Articles
Zero-Downtime Database Migrations with Drizzle ORM
Run database migrations without downtime using Drizzle ORM. Covers expand-contract pattern, backward-compatible schema changes, rollback strategies, and CI/CD integration for PostgreSQL.
Odoo Performance Tuning: PostgreSQL and Server Optimization
Expert guide to Odoo 19 performance tuning. Covers PostgreSQL configuration, indexing, query optimization, Nginx caching, and server sizing for enterprise deployments.
Natural Language Database Queries with OpenClaw
How OpenClaw enables natural language database queries, translating plain English business questions into accurate SQL without exposing database credentials or query complexity.