Deployment Guide
Learn how to deploy your Azura API to various environments.
It walks through preparing your application, then deploying it to traditional servers, containers (Docker and Kubernetes), and serverless platforms, with notes on monitoring, logging, and CI/CD.
Preparing for Deployment
Before deploying your Azura API, you should prepare your application:
1. Set up environment variables
Use environment variables for configuration instead of hardcoding values:
// src/index.ts
import { AzuraServer } from '@atosjs/azura';
const app = new AzuraServer({
  port: process.env.PORT ? parseInt(process.env.PORT) : 3000,
  host: process.env.HOST || 'localhost' // set HOST=0.0.0.0 to bind all interfaces (needed in containers)
});
// Database configuration
const dbConfig = {
  host: process.env.DB_HOST || 'localhost',
  port: process.env.DB_PORT ? parseInt(process.env.DB_PORT) : 5432,
  user: process.env.DB_USER || 'postgres',
  password: process.env.DB_PASSWORD || 'password',
  database: process.env.DB_NAME || 'azura_db'
};
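A related precaution: rather than falling back to defaults for everything, you can fail fast at startup when a required variable is missing in production. A minimal sketch (the requireEnv helper below is illustrative, not part of Azura):

```typescript
// Resolve an environment variable, with an optional fallback.
// Throws at startup if the variable is required but unset.
function requireEnv(name: string, fallback?: string): string {
  const value = process.env[name] ?? fallback;
  if (value === undefined) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// e.g. const dbPassword = requireEnv('DB_PASSWORD'); // aborts startup if unset
```

Failing at startup surfaces misconfiguration immediately, instead of at the first database query.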
2. Build your application
Compile your TypeScript code to JavaScript:
# Add build script to package.json
{
  "scripts": {
    "build": "tsc",
    "start": "node dist/index.js"
  }
}
# Run the build
npm run build
3. Set up proper error handling
Ensure your application handles errors gracefully:
// Global error handler
// Global error handler
app.use((err, req, res, next) => {
  console.error('Error:', err);

  // Don't expose stack traces in production
  const error = process.env.NODE_ENV === 'production'
    ? { message: 'Internal Server Error' }
    : { message: err.message, stack: err.stack };

  res.status(500).json(error);
});
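The middleware above only sees errors raised inside the request cycle. Node's process-level events can act as a last resort for everything else; registering handlers like these is a common Node.js pattern, not specific to Azura:

```typescript
// Log promise rejections that nothing awaited or caught.
process.on('unhandledRejection', (reason) => {
  console.error('Unhandled rejection:', reason);
});

// After an uncaught exception the process state is suspect:
// log and exit so PM2/Docker/Kubernetes can restart a clean instance.
process.on('uncaughtException', (err) => {
  console.error('Uncaught exception:', err);
  process.exit(1);
});
```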
4. Set up logging
Configure proper logging for production:
// Configure logging based on environment
if (process.env.NODE_ENV === 'production') {
  // Production logging (e.g., to a file or service)
  app.registerPlugin(logger, {
    level: 'info',
    format: 'json',
    destination: process.env.LOG_DESTINATION || 'stdout'
  });
} else {
  // Development logging (more verbose)
  app.registerPlugin(logger, {
    level: 'debug',
    format: 'pretty',
    destination: 'stdout'
  });
}
Traditional Server Deployment
Deploy your Azura API to a traditional server or VPS:
1. Set up your server
Install Node.js and other dependencies on your server (the distribution packages below may lag behind current releases; for a recent LTS version consider the NodeSource repository or a version manager such as nvm):
# Update package lists
sudo apt update
# Install Node.js and npm
sudo apt install -y nodejs npm
# Install PM2 globally (process manager)
sudo npm install -g pm2
2. Transfer your code
Transfer your code to the server using SCP, Git, or another method:
# Using SCP
scp -r ./dist package.json package-lock.json user@your-server:/path/to/app
# Or clone from Git
git clone https://github.com/your-username/your-repo.git
cd your-repo
npm ci --omit=dev  # the older "npm install --production" flag is deprecated in recent npm versions
3. Set up environment variables
Create a .env file or set environment variables on your server (note that Node.js does not load .env files automatically; use a loader such as dotenv, or the --env-file flag available since Node 20.6):
# Create .env file
cat > .env << EOL
PORT=3000
NODE_ENV=production
DB_HOST=your-db-host
DB_PORT=5432
DB_USER=your-db-user
DB_PASSWORD=your-db-password
DB_NAME=your-db-name
EOL
# Or set environment variables directly
export PORT=3000
export NODE_ENV=production
# ... other variables
4. Start your application with PM2
Use PM2 to manage your Node.js process:
# Start the application
pm2 start dist/index.js --name "azura-api"
# Ensure PM2 starts on system boot
pm2 startup
pm2 save
# Monitor your application
pm2 monit
# View logs
pm2 logs azura-api
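Instead of passing flags on the command line, PM2 can also read a declarative ecosystem file; a sketch, with illustrative values:

```javascript
// ecosystem.config.js
module.exports = {
  apps: [
    {
      name: 'azura-api',
      script: 'dist/index.js',
      instances: 'max',       // one worker per CPU core
      exec_mode: 'cluster',   // load-balance requests across workers
      env_production: {
        NODE_ENV: 'production',
        PORT: 3000
      }
    }
  ]
};
```

Start it with pm2 start ecosystem.config.js --env production.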
5. Set up a reverse proxy (optional)
Use Nginx or Apache as a reverse proxy:
# Nginx configuration example
server {
    listen 80;
    server_name your-domain.com;

    location / {
        proxy_pass http://localhost:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
Docker Deployment
Deploy your Azura API using Docker:
1. Create a Dockerfile
Create a Dockerfile in your project root:
FROM node:18-alpine
WORKDIR /app
# Copy package files and install dependencies
COPY package*.json ./
RUN npm ci --omit=dev
# Copy built application
COPY dist/ ./dist/
# Set environment variables
ENV NODE_ENV=production
ENV PORT=3000
# Expose the port
EXPOSE 3000
# Start the application
CMD ["node", "dist/index.js"]
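The Dockerfile above expects dist/ to be built on the host first. If you would rather build inside Docker, a multi-stage build keeps the final image small; note you would then need to remove src from the .dockerignore shown in the next step:

```dockerfile
# Stage 1: compile TypeScript
FROM node:18-alpine AS build
WORKDIR /app
COPY package*.json tsconfig.json ./
RUN npm ci
COPY src/ ./src/
RUN npm run build

# Stage 2: runtime image with production dependencies only
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY --from=build /app/dist ./dist/
ENV NODE_ENV=production
ENV PORT=3000
EXPOSE 3000
CMD ["node", "dist/index.js"]
```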
2. Create a .dockerignore file
Create a .dockerignore file to exclude unnecessary files:
node_modules
npm-debug.log
src
.git
.gitignore
.env
*.md
3. Build and run the Docker image
Build and run your Docker container:
# Build the Docker image
docker build -t azura-api .
# Run the container
docker run -p 3000:3000 --env-file .env -d --name azura-api azura-api
# View logs
docker logs -f azura-api
4. Using Docker Compose (optional)
Create a docker-compose.yml file for more complex setups (recent Compose releases ignore the top-level version key, so it is omitted here):
services:
  api:
    build: .
    ports:
      - "3000:3000"
    environment:
      - NODE_ENV=production
      - PORT=3000
      - DB_HOST=db
      - DB_PORT=5432
      - DB_USER=postgres
      - DB_PASSWORD=password
      - DB_NAME=azura_db
    depends_on:
      - db
    restart: always
  db:
    image: postgres:14
    ports:
      - "5432:5432"
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=password
      - POSTGRES_DB=azura_db
    volumes:
      - postgres_data:/var/lib/postgresql/data

volumes:
  postgres_data:
5. Deploy with Docker Compose
Deploy your application using Docker Compose (on newer Docker installations the command is docker compose, with a space; the standalone docker-compose binary behaves the same):
# Start the services
docker compose up -d
# View logs
docker compose logs -f
# Stop the services
docker compose down
Kubernetes Deployment
Deploy your Azura API to a Kubernetes cluster:
1. Create Kubernetes deployment file
Create a deployment.yaml file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: azura-api
spec:
  replicas: 3
  selector:
    matchLabels:
      app: azura-api
  template:
    metadata:
      labels:
        app: azura-api
    spec:
      containers:
        - name: azura-api
          image: your-registry/azura-api:latest
          ports:
            - containerPort: 3000
          env:
            - name: NODE_ENV
              value: "production"
            - name: PORT
              value: "3000"
            - name: DB_HOST
              valueFrom:
                configMapKeyRef:
                  name: azura-config
                  key: db_host
            - name: DB_PORT
              valueFrom:
                configMapKeyRef:
                  name: azura-config
                  key: db_port
            - name: DB_USER
              valueFrom:
                secretKeyRef:
                  name: azura-secrets
                  key: db_user
            - name: DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: azura-secrets
                  key: db_password
            - name: DB_NAME
              valueFrom:
                configMapKeyRef:
                  name: azura-config
                  key: db_name
          resources:
            limits:
              cpu: "500m"
              memory: "512Mi"
            requests:
              cpu: "100m"
              memory: "256Mi"
          livenessProbe:
            httpGet:
              path: /health
              port: 3000
            initialDelaySeconds: 30
            periodSeconds: 10
          readinessProbe:
            httpGet:
              path: /health
              port: 3000
            initialDelaySeconds: 5
            periodSeconds: 5
2. Create Kubernetes service file
Create a service.yaml file:
apiVersion: v1
kind: Service
metadata:
  name: azura-api
spec:
  selector:
    app: azura-api
  ports:
    - port: 80
      targetPort: 3000
  type: ClusterIP
3. Create ConfigMap and Secret
Create configuration and secrets:
# ConfigMap
apiVersion: v1
kind: ConfigMap
metadata:
  name: azura-config
data:
  db_host: "postgres-service"
  db_port: "5432"
  db_name: "azura_db"
---
# Secret
apiVersion: v1
kind: Secret
metadata:
  name: azura-secrets
type: Opaque
data:
  db_user: cG9zdGdyZXM= # base64 encoded "postgres"
  db_password: cGFzc3dvcmQ= # base64 encoded "password"
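Secret values must be base64 encoded. You can generate them on the command line; the -n flag matters, since encoding a trailing newline produces a different (and wrong) value:

```shell
echo -n 'postgres' | base64    # cG9zdGdyZXM=
echo -n 'password' | base64    # cGFzc3dvcmQ=
```

Alternatively, kubectl create secret generic azura-secrets --from-literal=db_user=postgres --from-literal=db_password=password creates the Secret directly and handles the encoding for you.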
4. Apply Kubernetes configurations
Deploy your application to Kubernetes:
# Apply ConfigMap and Secret
kubectl apply -f configmap.yaml
kubectl apply -f secret.yaml
# Apply Deployment and Service
kubectl apply -f deployment.yaml
kubectl apply -f service.yaml
# Check deployment status
kubectl rollout status deployment/azura-api
kubectl get deployments
kubectl get pods
kubectl get services
5. Set up Ingress (optional)
Create an Ingress resource for external access:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: azura-api-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - host: api.your-domain.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: azura-api
                port:
                  number: 80
Serverless Deployment
Deploy your Azura API to serverless platforms:
Vercel
Deploy to Vercel:
1. Create a vercel.json file
Create a configuration file for Vercel:
{
  "version": 2,
  "builds": [
    {
      "src": "dist/index.js",
      "use": "@vercel/node"
    }
  ],
  "routes": [
    {
      "src": "/(.*)",
      "dest": "dist/index.js"
    }
  ]
}
2. Deploy to Vercel
Deploy using the Vercel CLI:
# Install Vercel CLI
npm install -g vercel
# Deploy
vercel
# For production deployment
vercel --prod
AWS Lambda
Deploy to AWS Lambda with API Gateway:
1. Install Serverless Framework
Install the Serverless Framework:
npm install -g serverless
2. Create a serverless.yml file
Create a configuration file for Serverless Framework:
service: azura-api

provider:
  name: aws
  runtime: nodejs18.x
  stage: ${opt:stage, 'dev'}
  region: ${opt:region, 'us-east-1'}
  environment:
    NODE_ENV: production
    DB_HOST: ${env:DB_HOST}
    DB_PORT: ${env:DB_PORT}
    DB_USER: ${env:DB_USER}
    DB_PASSWORD: ${env:DB_PASSWORD}
    DB_NAME: ${env:DB_NAME}

functions:
  api:
    handler: dist/lambda.handler
    events:
      - http:
          path: /{proxy+}
          method: any
          cors: true

plugins:
  - serverless-offline
3. Create a Lambda handler
Create a Lambda handler file (src/lambda.ts) using the serverless-http adapter, which wraps a Node HTTP server for Lambda (install it with npm install serverless-http):
// src/lambda.ts
import { AzuraServer } from '@atosjs/azura';
import serverless from 'serverless-http';
// Import your controllers
import { UserController } from './controllers/user.controller';
// Create Azura server
const app = new AzuraServer();
// Register controllers
app.load([UserController]);
// Create serverless handler
export const handler = serverless(app.server);
4. Deploy to AWS
Deploy using the Serverless Framework:
# Deploy to AWS
serverless deploy
# For a specific stage
serverless deploy --stage production
Monitoring and Logging
Set up monitoring and logging for your deployed API:
Application Logging
Configure proper logging in your application:
// Configure logging middleware
app.use((req, res, next) => {
  const start = Date.now();

  // Add a listener for when the response finishes
  res.on('finish', () => {
    const duration = Date.now() - start;
    const log = {
      method: req.method,
      path: req.path,
      statusCode: res.statusCode,
      duration: `${duration}ms`,
      userAgent: req.get('user-agent'),
      ip: req.ip
    };
    console.log(JSON.stringify(log));
  });

  next();
});
Health Check Endpoint
Add a health check endpoint to your API:
// Health check endpoint
app.get('/health', (req, res) => {
  const healthcheck = {
    uptime: process.uptime(),
    message: 'OK',
    timestamp: Date.now()
  };

  try {
    // Check database connection ('db' is assumed to be your database client, not shown here)
    const dbConnected = db.isConnected();
    if (!dbConnected) {
      healthcheck.message = 'Database connection failed';
      return res.status(503).json(healthcheck);
    }
    res.status(200).json(healthcheck);
  } catch (error) {
    healthcheck.message = error instanceof Error ? error.message : 'Health check failed';
    res.status(503).json(healthcheck);
  }
});
Metrics Endpoint
Add a metrics endpoint for monitoring:
// Metrics endpoint
app.get('/metrics', (req, res) => {
  const metrics = {
    memory: process.memoryUsage(),
    cpu: process.cpuUsage(),
    uptime: process.uptime(),
    requests: {
      total: requestCounter.total,
      success: requestCounter.success,
      error: requestCounter.error
    },
    responseTime: {
      average: calculateAverageResponseTime(),
      p95: calculateP95ResponseTime()
    }
  };
  res.status(200).json(metrics);
});
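The requestCounter object and response-time helpers referenced above are not provided by Azura; a minimal in-memory version might look like this (per-process only, so the numbers reset on restart and are not shared across instances):

```typescript
// Simple in-memory request metrics. For multi-instance deployments,
// prefer a shared store or a metrics system such as Prometheus.
const requestCounter = { total: 0, success: 0, error: 0 };
const responseTimes: number[] = [];

function recordRequest(statusCode: number, durationMs: number): void {
  requestCounter.total++;
  if (statusCode < 400) requestCounter.success++;
  else requestCounter.error++;
  responseTimes.push(durationMs);
  if (responseTimes.length > 1000) responseTimes.shift(); // bound memory use
}

function calculateAverageResponseTime(): number {
  if (responseTimes.length === 0) return 0;
  return responseTimes.reduce((a, b) => a + b, 0) / responseTimes.length;
}

function calculateP95ResponseTime(): number {
  if (responseTimes.length === 0) return 0;
  const sorted = [...responseTimes].sort((a, b) => a - b);
  return sorted[Math.min(sorted.length - 1, Math.floor(sorted.length * 0.95))];
}
```

Call recordRequest from the logging middleware's finish listener, e.g. recordRequest(res.statusCode, duration).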
Third-Party Monitoring
Integrate with third-party monitoring services:
// Example integration with a monitoring service
// ('monitoring-service' is a placeholder for your provider's SDK)
import { monitor } from 'monitoring-service';

// Initialize monitoring
monitor.init({
  apiKey: process.env.MONITORING_API_KEY,
  serviceName: 'azura-api',
  environment: process.env.NODE_ENV
});

// Track requests
app.use((req, res, next) => {
  const transaction = monitor.startTransaction(`${req.method} ${req.path}`);
  res.on('finish', () => {
    transaction.result = res.statusCode < 400 ? 'success' : 'error';
    transaction.end();
  });
  next();
});

// Track errors
app.use((err, req, res, next) => {
  monitor.captureError(err);
  next(err);
});
CI/CD Pipeline
Set up a continuous integration and deployment pipeline:
GitHub Actions
Create a GitHub Actions workflow file (.github/workflows/deploy.yml):
name: Deploy Azura API

on:
  push:
    branches: [ main ]

jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Set up Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '18'
          cache: 'npm'
      - name: Install dependencies
        run: npm ci
      - name: Run tests
        run: npm test
      - name: Build
        run: npm run build
      - name: Deploy to production
        if: success()
        run: |
          # Deploy to your server or cloud platform
          # Example for Vercel
          npm install -g vercel
          vercel --prod --token ${{ secrets.VERCEL_TOKEN }}
        env:
          VERCEL_TOKEN: ${{ secrets.VERCEL_TOKEN }}
          VERCEL_PROJECT_ID: ${{ secrets.VERCEL_PROJECT_ID }}
          VERCEL_ORG_ID: ${{ secrets.VERCEL_ORG_ID }}
GitLab CI/CD
Create a GitLab CI/CD configuration file (.gitlab-ci.yml):
stages:
  - test
  - build
  - deploy

variables:
  NODE_VERSION: "18"

test:
  stage: test
  image: node:$NODE_VERSION
  script:
    - npm ci
    - npm test
  cache:
    paths:
      - node_modules/

build:
  stage: build
  image: node:$NODE_VERSION
  script:
    - npm ci
    - npm run build
  artifacts:
    paths:
      - dist/
  cache:
    paths:
      - node_modules/

deploy:
  stage: deploy
  image: node:$NODE_VERSION
  script:
    - npm install -g serverless
    - serverless deploy --stage production
  environment:
    name: production
  only:
    - main
Conclusion
This guide covered various deployment options for your Azura API. Choose the approach that best fits your project requirements and infrastructure preferences.
- Traditional server deployment is simple but requires more manual maintenance.
- Docker deployment provides consistency across environments and easier scaling.
- Kubernetes deployment offers advanced orchestration for complex applications.
- Serverless deployment reduces infrastructure management but may have limitations for certain use cases.
- Always set up proper monitoring and logging for production deployments.
- Implement CI/CD pipelines to automate testing and deployment processes.