When we first introduced the high-level architecture of WorkHub.so, we focused on how we turned a scrappy M1 Mac Mini into a global platform. But there’s more to the story than just an HVAC closet and Docker containers. This article dives deeper into the lower-level details of our tech stack—how it’s structured, why we made certain decisions, and what keeps everything running smoothly.
If you haven’t read the main article yet, check out A Global Platform Served From an HVAC Closet for a broader overview before diving into this detailed breakdown. Let’s get into the nuts and bolts.
Development Environment
Our development environment is designed to mimic production as closely as possible, ensuring smooth transitions from local development to live deployment. Here’s how we’ve set it up:
Monorepo Setup
We use a Yarn workspace monorepo to organize our codebase into three main packages:
- Frontend: A React Remix application.
- Backend: A Node.js API.
- Blog: Powered by Astro for content publishing.
packages
├── frontend
│   ├── src
│   └── ...
├── backend
│   ├── src
│   └── ...
└── blog
    ├── src
    └── ...
This structure allows us to share dependencies, streamline builds, and maintain consistent tooling across all packages.
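To make that concrete, the root package.json does little more than declare the three workspaces. A minimal sketch (names trimmed, scripts and devDependencies omitted) looks like this:
// package.json (root) - minimal sketch
{
  "name": "workhub",
  "private": true,
  "workspaces": [
    "packages/frontend",
    "packages/backend",
    "packages/blog"
  ]
}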
Frontend Environment
The frontend is a React Remix application powered by Vite, with some key features:
- Styling: Tailwind CSS for utility-first design.
- Authentication: Supabase for user authentication.
- Mapping: Google Maps for geographic data visualization.
- TypeScript: Ensures type safety across the codebase.
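To show how Remix and Vite fit together, here is a stripped-down sketch of the Vite config; our real config carries more plugins and options, and the tsconfigPaths plugin below is illustrative rather than a requirement:
// packages/frontend/vite.config.ts (simplified sketch)
import { vitePlugin as remix } from "@remix-run/dev";
import { defineConfig } from "vite";
import tsconfigPaths from "vite-tsconfig-paths";

export default defineConfig({
  plugins: [
    remix(),         // Remix runs as a Vite plugin, handling routing and the SSR build
    tsconfigPaths(), // resolve TypeScript path aliases
  ],
});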
Production Workflow
The frontend is built into a Docker container which, when run, boots a PM2 server. PM2 is a process manager that lets us run multiple instances of the app; each instance is served with a plain remix-serve, giving us efficient server-side rendering (SSR).
# Frontend Dockerfile
FROM node:22-alpine AS builder
WORKDIR /app
# Install Yarn 4
RUN corepack enable
RUN corepack prepare yarn@stable --activate
# Environment setup
ARG NODE_ENV=production
ENV NODE_ENV=${NODE_ENV}
ENV NODE_OPTIONS="--max-old-space-size=8192"
# Force using WASM version of rollup
ENV ROLLUP_NATIVE_RUNTIME=wasm
# Copy package files
COPY package.json yarn.lock ./
COPY packages/frontend/package.json ./packages/frontend/
COPY packages/blog/package.json ./packages/blog/
RUN yarn install
# Copy source files
COPY . .
WORKDIR /app/packages/frontend
RUN yarn build
# Setup final configuration
WORKDIR /app/packages/frontend
RUN mkdir -p /app/logs && \
    chmod +x ./scripts/start_server.sh
EXPOSE 4001
ENV PORT=4001
# start PM2 server
ENTRYPOINT ["./scripts/start_server.sh"]
// pm2 config
{
  "apps": [
    {
      "name": "frontend-server",
      "script": "yarn remix-serve ./build/server/index.js",
      "log_file": "./logs/server.log",
      "node_args": "--enable-source-maps",
      "time": true,
      "max_memory_restart": "300M",
      "instances": 2,
      "max_restarts": 10
    }
  ]
}
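The start_server.sh entrypoint referenced in the Dockerfile is essentially a thin wrapper around PM2's container runtime. A minimal sketch looks like this (the config filename is illustrative, and it assumes pm2 is installed as a workspace dependency):
#!/bin/sh
# scripts/start_server.sh - minimal sketch
# pm2-runtime keeps PM2 in the foreground so it behaves as PID 1:
# the container stays alive, and logs and stop signals flow through Docker as expected.
exec yarn pm2-runtime start ./pm2.config.json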
Blog Environment
The blog package uses Astro v5, seamlessly integrating with our frontend.
Astro + Tailwind CSS
Astro shares the same Tailwind CSS theme as our frontend, ensuring consistent design across all user-facing components.
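One straightforward way to share the theme (our exact setup may differ slightly) is to pull the frontend's Tailwind config in as a preset:
// packages/blog/tailwind.config.ts - illustrative; the preset import path is an example
import type { Config } from "tailwindcss";
import sharedTheme from "../frontend/tailwind.config";

export default {
  presets: [sharedTheme], // reuse the frontend's colors, fonts, and spacing
  content: ["./src/**/*.{astro,md,mdx,ts,tsx}"],
} satisfies Config;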
Build Process
Content is compiled into static files and placed in the frontend's /public folder:
astro build && cp -r public/* ../frontend/public/lab/
This setup allows the Remix server to serve blog content efficiently.
MDX for Content
We use @astrojs/mdx to write content in MDX, combining the simplicity of Markdown with the flexibility of embedding React components directly in a post.
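For example, a post can mix plain Markdown with an imported component (the Callout component and its path below are hypothetical stand-ins):
---
title: "Example post"
---
import Callout from "../components/Callout.astro";

Regular **Markdown** works as you'd expect.

<Callout type="info">
  ...and components can be dropped straight into the prose.
</Callout>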
Backend Environment
The backend is a TypeScript Node.js API, also powered by Vite. Key features include:
- ESM Modules: Modern syntax matching the frontend.
- Auto Reloading: Vite ensures seamless development with hot reloading.
- Caching: Redis reduces API response times to ~2ms for frequent requests.
- Image Optimization: The sharp library converts uploaded images to .webp and generates separate resolutions for thumbnails, previews, and full-size versions.
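As a rough illustration of that image pipeline (the sizes, paths, and function name here are examples, not our exact values):
// illustrative sharp pipeline - names and sizes are examples
import sharp from "sharp";

const variants = { thumb: 160, preview: 640, full: 1920 };

export async function processUpload(buffer: Buffer, slug: string) {
  await Promise.all(
    Object.entries(variants).map(([name, width]) =>
      sharp(buffer)
        .resize({ width, withoutEnlargement: true }) // never upscale small uploads
        .webp({ quality: 80 })                       // every variant is stored as .webp
        .toFile(`uploads/${slug}-${name}.webp`)
    )
  );
}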
Development Workflow
A PostGIS Docker container replicates our production database locally. This ensures realistic testing without affecting live data.
// packages/backend/package.json
{
  "scripts": {
    "docker:up": "docker-compose -f ../../docker-compose.yml up postgres -d",
    "dev": "yarn docker:up ; node --inspect --trace-warnings --enable-source-maps ../../node_modules/.bin/vite dev"
  }
}
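The docker-compose.yml referenced above defines the postgres service. An illustrative excerpt (the build path, credentials, and volume name are examples) looks something like this:
# docker-compose.yml (illustrative excerpt)
services:
  postgres:
    build: ./docker/postgis        # the custom ARM64 PostGIS image shown later in this article
    ports:
      - "5432:5432"
    environment:
      POSTGRES_PASSWORD: postgres  # local-only credentials
    volumes:
      - pg_data:/var/lib/postgresql/data
volumes:
  pg_data: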
Production Workflow
Like the frontend, the backend is built into a Docker container and is managed by PM2 for scaling and reliability.
# Backend Dockerfile
# Build stage
FROM node:22-alpine AS builder
WORKDIR /app
# Install Yarn 4
RUN corepack enable
RUN corepack prepare yarn@stable --activate
# Set NODE_ENV for the entire build process
ARG NODE_ENV=production
ENV NODE_ENV=${NODE_ENV}
COPY package.json yarn.lock* ./
COPY packages/backend ./packages/backend/
COPY --from=env_folder .env ./packages/backend/
WORKDIR /app/packages/backend
RUN yarn install;
RUN yarn build;
# Set up directories and permissions
RUN mkdir -p /app/logs
# Make the start_server script executable
RUN chmod +x ./scripts/start_server.sh
EXPOSE 4002
ENV PORT=4002
# starts PM2 server
ENTRYPOINT ["./scripts/start_server.sh"]
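Note the COPY --from=env_folder line: the .env file comes from an additional named build context rather than from the repository itself. You can reproduce what CI does locally (the image tag is arbitrary) with something like:
# build the backend image locally with the extra env_folder context
docker buildx build \
  --build-context env_folder=./env \
  -f Dockerfile.backend \
  -t workhub-backend:local \
  --load .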
// backend pm2 config
module.exports = {
  apps: [
    {
      name: 'node-server',
      script: './dist/server.prod.js',
      log_file: './logs/server.log',
      node_args: '--enable-source-maps',
      time: true,
      max_memory_restart: '300M',
      instances: 2,
      max_restarts: 10,
    },
  ],
};
Our PostGIS ARM64 Dockerfile
This custom build optimizes PostGIS for the M1 ARM architecture, significantly reducing CPU and memory usage.
# The official PostgreSQL image for ARM64 architecture
FROM arm64v8/postgres:16.3
# Set non-interactive mode for apt-get to avoid prompts
RUN export DEBIAN_FRONTEND=noninteractive \
    # Update package list
    && apt-get update \
    # Install PostGIS
    && apt-get install -y --no-install-recommends \
    postgresql-16-postgis-3 \
    postgresql-16-postgis-3-scripts \
    postgresql-contrib \
    postgis \
    # Clean up apt cache to reduce image size
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/*
# Set environment variables for PostgreSQL logging
ENV POSTGRES_LOGGING=true \
    POSTGRES_LOG_DIRECTORY=/var/log/postgresql
# Define mount points for PostgreSQL data and log directories
VOLUME ["/var/lib/postgresql/data", "/var/log/postgresql"]
# Expose the PostgreSQL port (defaults to the standard 5432 unless overridden at build time)
ARG POSTGRES_PORT=5432
EXPOSE ${POSTGRES_PORT}
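Building and running the image locally is straightforward; the tag and password below are examples, and the final command simply confirms that the PostGIS extension can be enabled in a database:
# build and run the custom PostGIS image on an ARM64 host
docker build -t workhub-postgis:16 .
docker run -d --name postgres \
  -p 5432:5432 \
  -e POSTGRES_PASSWORD=postgres \
  workhub-postgis:16
# enable the extension in the target database
docker exec -it postgres psql -U postgres -c "CREATE EXTENSION IF NOT EXISTS postgis;"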
Our CI/CD GitHub Action
The full context for this GitHub Action can be found here. In essence, this is our streamlined build-and-deploy workflow: it detects which packages changed, bumps their versions, builds and publishes Docker images, and deploys them over SSH.
# .github/workflows/build_and_deploy.yml
name: Build, Publish and Deploy Docker Images
on:
  pull_request:
    types: [closed]
    branches: [main]
    paths:
      - "packages/backend/**"
      - "packages/frontend/**"
permissions:
  packages: write
  pull-requests: write
  contents: write
jobs:
  changes:
    runs-on: ubuntu-latest
    outputs:
      backend: ${{ steps.check-changes.outputs.backend }}
      frontend: ${{ steps.check-changes.outputs.frontend }}
    steps:
      # Checkout the repository
      - uses: actions/checkout@v4
        with:
          token: ${{ secrets.GITHUB_TOKEN }}
          fetch-depth: 2
      # Check for changes in specific directories
      - name: Check for changes
        id: check-changes
        run: |
          git diff --name-only HEAD^ HEAD > changes.txt
          echo "backend=$(grep -q 'packages/backend/' changes.txt && echo 'true' || echo 'false')" >> $GITHUB_OUTPUT
          echo "frontend=$(grep -q 'packages/frontend/' changes.txt && echo 'true' || echo 'false')" >> $GITHUB_OUTPUT
  update-versions:
    needs: changes
    runs-on: ubuntu-latest
    outputs:
      backend_version: ${{ steps.bump-versions.outputs.backend_version }}
      frontend_version: ${{ steps.bump-versions.outputs.frontend_version }}
    steps:
      # Checkout repository to bump versions if necessary
      - name: Checkout repository
        uses: actions/checkout@v4
        with:
          token: ${{ secrets.GITHUB_TOKEN }}
      # Set up the Node.js environment for this job
      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: "22"
      # Bump versions of changed packages and expose the new versions as job outputs.
      - name: Bump versions
        id: bump-versions
        run: |
          if [ "${{ needs.changes.outputs.backend }}" = "true" ]; then
            cd packages/backend
            current_version=$(jq -r '.version' package.json)
            new_version=$(echo $current_version | awk -F. '{$NF = $NF + 1;} 1' | sed 's/ /./g')
            jq ".version = \"$new_version\"" package.json > tmp.$$.json && mv tmp.$$.json package.json
            echo "backend_version=$new_version" >> $GITHUB_OUTPUT
            cd ../..
          fi
          if [ "${{ needs.changes.outputs.frontend }}" = "true" ]; then
            cd packages/frontend
            current_version=$(jq -r '.version' package.json)
            new_version=$(echo $current_version | awk -F. '{$NF = $NF + 1;} 1' | sed 's/ /./g')
            jq ".version = \"$new_version\"" package.json > tmp.$$.json && mv tmp.$$.json package.json
            echo "frontend_version=$new_version" >> $GITHUB_OUTPUT
            cd ../..
          fi
  build:
    runs-on: ubuntu-latest
    needs: [changes, update-versions]
    env:
      DOCKER_BUILDKIT: 1
    strategy:
      matrix:
        service: [backend, frontend]
    steps:
      # Checkout the repository if there are changes to the service being built.
      - uses: actions/checkout@v4
        if: ${{ needs.changes.outputs[matrix.service] == 'true' }}
        with:
          token: ${{ secrets.GITHUB_TOKEN }}
      # Set up Docker Buildx for building multi-platform images.
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3
      # Log in to GitHub Container Registry to push images.
      - name: Log in to GitHub Container Registry
        uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      # Build and push the frontend/backend Docker images.
      - name: Build and Push Docker image
        if: needs.changes.outputs[matrix.service] == 'true'
        uses: docker/build-push-action@v5
        with:
          context: .
          file: ./Dockerfile.${{ matrix.service }}
          platforms: linux/arm64
          push: true
          tags: |
            ghcr.io/${{ github.repository }}/${{ matrix.service }}:${{ matrix.service == 'backend' && needs.update-versions.outputs.backend_version || needs.update-versions.outputs.frontend_version }}
            ghcr.io/${{ github.repository }}/${{ matrix.service }}:latest
          cache-from: type=gha
          cache-to: type=gha,mode=max
          build-contexts: env_folder=./env
  deploy:
    runs-on: ubuntu-latest
    needs: [changes, build]
    steps:
      - uses: actions/checkout@v4
        with:
          token: ${{ secrets.GITHUB_TOKEN }}
      - name: Pull New Images and Restart
        uses: appleboy/ssh-action@master
        env:
          GR_ACCESS_TOKEN: ${{ secrets.GR_ACCESS_TOKEN }}
          DOCKER_USERNAME: ${{ github.actor }}
          KEYCHAIN_USERNAME: ${{ secrets.USERNAME }}
        with:
          host: ${{ secrets.HOST }}
          username: ${{ secrets.USERNAME }}
          key: ${{ secrets.SSH_PRIVATE_KEY }}
          port: ${{ secrets.PORT }}
          script: |
            echo "Pulling new images and restarting containers"
            # This script is run on the M1 Mini to perform backups, pull the latest images, and deploy the new containers.
            # It isn't something we share publicly, but this workflow gets you to the point
            # of running your own commands on your own server.
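While our actual deploy script stays private, a generic sketch of the kind of commands such a script runs (image names, paths, and the compose file are placeholders, and it assumes the environment variables above are forwarded to the remote session) would look something like:
# generic pull-and-restart sketch run over SSH - placeholders, not our real script
echo "$GR_ACCESS_TOKEN" | docker login ghcr.io -u "$DOCKER_USERNAME" --password-stdin
docker compose -f ~/workhub/docker-compose.prod.yml pull
docker compose -f ~/workhub/docker-compose.prod.yml up -d
docker image prune -f   # clean up superseded image layers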
Concluding Remarks
By combining these tools and practices, we’ve created a development environment that empowers rapid iteration without sacrificing stability.
For more insights into our architecture and infrastructure decisions, revisit A Global Platform Served From an HVAC Closet.