2025 · Case study · Phill Morgan

Distributed E-Commerce Microservices Engine

An event-driven microservices architecture for e-commerce: ten services, RabbitMQ message bus, MongoDB replica set, Node.js cluster scaling.

Available for portfolio review on request.

Stack

  • Node.js
  • TypeScript
  • Express
  • MongoDB
  • RabbitMQ
  • Next.js 14
  • React
  • Tailwind CSS
  • NextAuth
  • Docker

Role & Scope

Solo build (reference architecture). Ten independent services, RabbitMQ message bus, MongoDB replica set, Node.js cluster-based scaling, Docker Compose orchestration.

Overview

A reference architecture for distributed e-commerce. The platform decomposes into ten independent services, each owning its own data and talking asynchronously over RabbitMQ. The goal was to prove the patterns: service isolation, message-driven coordination, horizontal scaling. Full business logic can be layered on top later.

A Next.js 14 storefront sits on top, talking to backend services through the same message bus.

Key decisions

RabbitMQ request-reply over direct HTTP between services. Microservice architectures have two common shapes: HTTP calls between services (simpler, familiar, tightly coupled) and a message bus with asynchronous request-reply patterns (more moving parts, looser coupling). I chose the bus. With HTTP, the product service depends on the auth service being reachable right now; a transient auth outage cascades into every product page failing. With RabbitMQ correlation IDs, the product service publishes a message and waits for a correlated response: the auth service can be restarted, scaled, redeployed, and the product service sees latency, not errors. The trade was operational complexity: a broker to run, a shared utility library to maintain, correlation-ID plumbing in every service. Service isolation is the whole point of the architecture, so that operational cost is the shape of the product.

Node cluster module over container orchestration for this scale. A Kubernetes-shaped instinct would have said “one service per pod, horizontal pod autoscaler, service mesh.” I used Node’s built-in cluster module instead: primary process forks one worker per CPU core, monitors worker health, respawns failures with rate-limiting to prevent crash loops. The reference-architecture scope is ten services, not a hundred, and the workload is request-reply, not long-running computation. Cluster gives horizontal scaling within a single host with zero external dependencies; a Kubernetes deployment would have meant operating Kubernetes. The trade is that horizontal scaling across hosts needs an explicit orchestration layer added later; the pattern is ready for it (services are stateless, all state in MongoDB and RabbitMQ) but I haven’t layered it on yet.

MongoDB replica set with per-service databases over a shared database. The biggest anti-pattern in distributed systems is services sharing a database: it looks like isolation but creates invisible coupling through schema. Each service here owns its own database within a three-node MongoDB replica set. The replica set gives read redundancy and automatic failover without extra infrastructure; per-service databases mean the auth service can change its user schema without breaking the catalogue service. The trade is that any cross-service query (give me all orders by this user) has to be explicitly orchestrated via the message bus rather than a join. That’s on purpose. The constraint forces the architecture to stay clean.

Service Architecture

Ten services run independently, each with its own process, database connection, and queue consumer:

  • Authentication: registration and login, bcrypt hashing, session-based auth via NextAuth
  • Product: catalogue management with relationships to attributes and categories
  • Category: hierarchical category trees
  • Attribute: product attribute definitions and groups
  • Option: option groups and selectable values for variant generation
  • Cart: add, remove, update operations
  • Filter: dynamic filtering on attributes and categories
  • Article: editorial content and blog posts with metadata
  • SEO: metadata for products, categories, and content pages
  • Seeder: bootstraps dev environments with consistent sample data across every collection

Message-Driven Communication

All inter-service traffic runs through RabbitMQ using a request-reply pattern with correlation IDs. When the frontend needs to authenticate a user, it publishes to the auth queue and waits for a correlated response. No service ever calls another directly over HTTP.

The pattern lives in a shared utility library handling connections, serialisation, and response routing. Each service consumes from its own named queue, processes the request, and publishes the response back to the caller’s reply queue.
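The correlation-ID plumbing can be sketched without a broker. Below, an in-process EventEmitter stands in for the RabbitMQ channel, and the queue names, `Envelope` shape, and `request` helper are illustrative inventions, not the shared library's actual API; the real system would do the same routing over amqplib-style `sendToQueue`/`consume` calls:

```typescript
import { EventEmitter } from "node:events";
import { randomUUID } from "node:crypto";

// In-memory stand-in for the RabbitMQ channel: queue name → consumer.
// The correlation-ID routing is the same either way.
const bus = new EventEmitter();

interface Envelope {
  correlationId: string;
  replyTo: string;
  payload: unknown;
}

// Responder side: the auth service consumes its named queue and publishes
// the result back to the caller's reply queue.
bus.on("auth.queue", (msg: Envelope) => {
  const result = { ok: true, user: msg.payload }; // stand-in business logic
  bus.emit(msg.replyTo, { correlationId: msg.correlationId, payload: result });
});

// Requester side: publish with a fresh correlation ID and resolve on the
// correlated reply. A broker or service outage surfaces as latency, then a
// timeout — never as a hard dependency on the other service being up.
function request(queue: string, payload: unknown, timeoutMs = 5000): Promise<unknown> {
  return new Promise((resolve, reject) => {
    const correlationId = randomUUID();
    const replyTo = `reply.${correlationId}`; // unique per request
    const timer = setTimeout(() => {
      bus.removeAllListeners(replyTo);
      reject(new Error("reply timeout"));
    }, timeoutMs);
    bus.once(replyTo, (reply: { correlationId: string; payload: unknown }) => {
      // replyTo is unique per request, so the first message on it is ours;
      // the ID check is belt-and-braces.
      if (reply.correlationId !== correlationId) return;
      clearTimeout(timer);
      resolve(reply.payload);
    });
    bus.emit(queue, { correlationId, replyTo, payload });
  });
}
```

Usage: `await request("auth.queue", { email: "a@example.com" })` resolves with whatever the auth consumer published to the reply queue.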

Services can be deployed, restarted, and scaled independently without affecting anything else. If the SEO service goes down, browsing keeps working.

Cluster-Based Scaling

Every service uses the Node.js cluster module in a primary/worker pattern. The primary forks one worker per CPU core, watches worker health, and respawns failures with rate limiting: max five restart attempts inside a sixty-second window, which stops a runaway crash loop dead.
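The rate limit is the interesting part, and it isolates cleanly into a small helper. This is an assumed reconstruction, not the project's code: a sliding-window budget that the cluster primary would consult on each worker `exit` event before forking a replacement.

```typescript
// Respawn budget, using the limits quoted above: five restarts inside a
// sixty-second sliding window, after which the primary stops forking
// replacements instead of feeding a crash loop.
class RestartBudget {
  private restarts: number[] = []; // timestamps (ms) of recent respawns

  constructor(
    private readonly maxRestarts = 5,
    private readonly windowMs = 60_000,
  ) {}

  // Called on each worker exit; returns true if it is safe to fork again.
  tryRestart(now = Date.now()): boolean {
    // Drop restarts that have aged out of the window.
    this.restarts = this.restarts.filter((t) => now - t < this.windowMs);
    if (this.restarts.length >= this.maxRestarts) return false; // budget spent
    this.restarts.push(now);
    return true;
  }
}
```

In the primary, the wiring would be along the lines of `cluster.on("exit", () => { if (budget.tryRestart()) cluster.fork(); })`, with one `fork()` per CPU core at startup.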

Worker count is environment-driven, so each service's resource allocation can be tuned to its traffic pattern.

Shared Type Library

A shared internal package defines TypeScript interfaces for every domain entity: users, products, articles, categories, attributes, carts, filters, options, SEO metadata. It also exports utilities for MongoDB connections, RabbitMQ channel management, correlated message sending, and UUID generation.

Every service pulls this package in, which means message contracts are checked at compile time rather than blowing up at runtime.
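An illustrative slice of what such a package might export — all names here are assumed, not the library's real ones — showing how a typed envelope turns a malformed message into a build failure rather than a runtime one:

```typescript
import { randomUUID } from "node:crypto";

// Domain entity shared by every service (illustrative fields).
interface User {
  id: string;
  email: string;
}

// Generic message envelope: the payload type is part of the contract.
interface ServiceMessage<T> {
  correlationId: string;
  replyTo: string;
  payload: T;
}

// A compile-time contract: the auth queue carries User payloads.
// Passing e.g. { id: 1 } here fails tsc, not a production queue.
function makeAuthMessage(payload: User): ServiceMessage<User> {
  const correlationId = randomUUID();
  return { correlationId, replyTo: `reply.${correlationId}`, payload };
}
```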

Data Layer

Each service connects to its own database inside a three-node MongoDB replica set, so data sovereignty is enforced at the service boundary. The replica set gives read redundancy and automatic failover without extra work.
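The isolation lives in the connection string: every service shares the replica-set hosts but gets its own database name. A sketch of how those URIs might be derived — the hostnames, replica-set name, and `_db` naming convention are assumptions, not the project's actual values:

```typescript
// All services point at the same three-node replica set…
const REPLICA_HOSTS = "mongo-1:27017,mongo-2:27017,mongo-3:27017"; // assumed hostnames
const REPLICA_SET = "rs0"; // assumed replica-set name

// …but each owns a distinct database, so the auth service can reshape
// auth_db without touching product_db through a shared collection.
function serviceDbUri(service: string): string {
  return `mongodb://${REPLICA_HOSTS}/${service}_db?replicaSet=${REPLICA_SET}`;
}
```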

Cross-service bootstrapping happens through the seeder service, which populates every collection with matched sample data: articles, categories, attribute groups, and the relationships between them.

Infrastructure

Docker Compose orchestrates the full stack: ten service containers, a three-node MongoDB replica set, a RabbitMQ instance. Each service builds from its own Dockerfile, with environment variables controlling database names, queue connections, and cluster worker counts.
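In that shape, a compose fragment might look like the following — service names, images, and variable names are assumed for illustration, with one service container, one replica-set member, and the broker shown:

```yaml
services:
  product-service:
    build: ./services/product
    environment:
      MONGO_URI: mongodb://mongo-1:27017,mongo-2:27017,mongo-3:27017/product_db?replicaSet=rs0
      RABBITMQ_URL: amqp://rabbitmq:5672
      CLUSTER_WORKERS: "4"   # per-service worker count, tuned to traffic
    depends_on:
      - rabbitmq
      - mongo-1

  mongo-1:
    image: mongo:7
    command: mongod --replSet rs0 --bind_ip_all
    # mongo-2 and mongo-3 follow the same shape

  rabbitmq:
    image: rabbitmq:3-management
    ports:
      - "15672:15672"        # management UI
```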

Frontend

The Next.js 14 storefront connects directly to RabbitMQ from its API routes, publishing messages to service queues and returning responses to the browser. Authentication uses NextAuth with a credentials provider that delegates to the auth service over the message bus. A catch-all slug route handles product, category, and article pages dynamically.