Production Build Guide for Angular Chat Application

Overview

This guide covers the complete process of preparing, building, and deploying your Angular chat application with LLM integration to production. It includes optimization strategies, security considerations, and deployment options.

Quick Start Production Build

Basic Production Build

# Install dependencies
npm install

# Build for production
ng build --configuration production

# The built files will be in the dist/ folder

Advanced Production Build with Optimizations

# The production configuration already enables AOT, the build optimizer,
# and optimization; pass extra flags only to override defaults
ng build --configuration production --source-map=false

Pre-Production Checklist

1. Code Quality & Performance

  • Remove all console.log statements from production code (see the logger sketch after this list)
  • Remove unused imports and dependencies
  • Optimize bundle size with tree shaking
  • Enable Ahead-of-Time (AOT) compilation
  • Minimize and compress assets
  • Enable gzip compression
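
A common way to enforce the first item is a small logger service gated by the environment, so stray console.log calls never ship. A minimal sketch, assuming the logLevel values used in the environment files later in this guide:

// logger.service.ts — minimal sketch; adapt levels and output to your needs
import { Injectable } from '@angular/core';
import { environment } from '../environments/environment';

const LEVELS = ['debug', 'info', 'warn', 'error'] as const;
type LogLevel = typeof LEVELS[number];

@Injectable({ providedIn: 'root' })
export class LoggerService {
  private threshold = LEVELS.indexOf(environment.logLevel as LogLevel);

  debug(...args: unknown[]): void { this.log('debug', args); }
  info(...args: unknown[]): void { this.log('info', args); }
  warn(...args: unknown[]): void { this.log('warn', args); }
  error(...args: unknown[]): void { this.log('error', args); }

  private log(level: LogLevel, args: unknown[]): void {
    // Drop anything below the configured threshold (e.g. only 'error' in prod)
    if (LEVELS.indexOf(level) >= this.threshold) {
      console[level](...args);
    }
  }
}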

2. Security Hardening

  • Implement Content Security Policy (CSP)
  • Secure API key handling
  • Enable HTTPS only
  • Implement proper CORS policies
  • Sanitize user inputs
  • Remove development-only code

3. Environment Configuration

  • Set up production environment variables
  • Configure production API endpoints
  • Set up error tracking (Sentry, LogRocket)
  • Configure analytics (Google Analytics, etc.)
  • Set up monitoring and health checks

Angular Configuration for Production

1. Update angular.json for Production Optimization

{
  "projects": {
    "your-app": {
      "architect": {
        "build": {
          "configurations": {
            "production": {
              "budgets": [
                {
                  "type": "initial",
                  "maximumWarning": "2mb",
                  "maximumError": "5mb"
                },
                {
                  "type": "anyComponentStyle",
                  "maximumWarning": "6kb",
                  "maximumError": "10kb"
                }
              ],
              "outputHashing": "all",
              "optimization": true,
              "sourceMap": false,
              "namedChunks": false,
              "extractLicenses": true,
              "vendorChunk": false,
              "buildOptimizer": true,
              "aot": true,
              "fileReplacements": [
                {
                  "replace": "src/environments/environment.ts",
                  "with": "src/environments/environment.prod.ts"
                }
              ]
            }
          }
        }
      }
    }
  }
}

2. Create Production Environment File

Create src/environments/environment.prod.ts:

export const environment = {
  production: true,
  apiUrl: 'https://your-production-api.com',
  enableAnalytics: true,
  enableErrorTracking: true,
  logLevel: 'error', // Only log errors in production
  version: '1.0.0'
};

3. Update Main Environment File

Update src/environments/environment.ts:

export const environment = {
  production: false,
  apiUrl: 'http://localhost:3000',
  enableAnalytics: false,
  enableErrorTracking: false,
  logLevel: 'debug',
  version: '1.0.0-dev'
};

Security Considerations for LLM Integration

1. API Key Security

❌ Never do this in production:

// DON'T: Hardcode API keys in frontend
const apiKey = 'your-actual-api-key-here';

✅ Recommended approaches:

Option A: Backend Proxy (Recommended)

// Frontend calls your backend; the backend holds the keys and calls the LLM APIs
private callLLMAPI(message: string): Observable<string> {
  return this.http.post<{ response: string }>('/api/chat', {
    message: message,
    model: this.selectedModel
  }).pipe(
    map(response => response.response)
  );
}
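
On the server side, the proxy can be as small as the following Node/Express sketch. This is not a drop-in implementation: the /api/chat route, the LLM_API_KEY variable, the upstream URL, and the response shape are assumptions to adapt to your provider.

// server.ts — hedged proxy sketch (Node 18+, where fetch is built in)
import express from 'express';

const app = express();
app.use(express.json());

app.post('/api/chat', async (req, res) => {
  const { message, model } = req.body;
  try {
    // The API key lives only on the server, never in the Angular bundle
    const upstream = await fetch('https://your-llm-provider.example/v1/chat', {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        Authorization: `Bearer ${process.env.LLM_API_KEY}`,
      },
      body: JSON.stringify({ message, model }),
    });
    const data = await upstream.json();
    res.json({ response: data.response }); // assumed upstream response shape
  } catch {
    res.status(502).json({ error: 'LLM provider unavailable' });
  }
});

app.listen(3000);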

Option B: Environment-based Configuration

// Use environment variables (still visible to users, but better than hardcoding)
private callGoogleGeminiAPI(userMessage: string): Observable<string> {
  const apiKey = environment.geminiApiKey; // Set via the build process
  // ... rest of implementation
}

2. Content Security Policy (CSP)

Add to your index.html:

<meta http-equiv="Content-Security-Policy" content="
  default-src 'self';
  script-src 'self' 'unsafe-inline';
  style-src 'self' 'unsafe-inline';
  connect-src 'self'
    https://generativelanguage.googleapis.com
    https://api.openai.com
    https://api.anthropic.com;
  img-src 'self' data: https:;
  font-src 'self' data:;
">

Note that 'unsafe-inline' significantly weakens script-src; keep it only if your build genuinely needs inline scripts, and drop the LLM origins from connect-src once all calls go through your own backend.

3. Input Sanitization

import { SecurityContext } from '@angular/core';
import { DomSanitizer } from '@angular/platform-browser';

constructor(private sanitizer: DomSanitizer) {}

// Sanitize user inputs before sending them to the LLM
sanitizeInput(input: string): string {
  return this.sanitizer.sanitize(SecurityContext.HTML, input) || '';
}
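
For example, a send handler can run every outgoing message through the sanitizer first (sendMessage and this.chatService are placeholder names):

sendMessage(rawInput: string): void {
  const clean = this.sanitizeInput(rawInput);
  if (clean.trim().length > 0) {
    this.chatService.send(clean); // only sanitized text reaches the LLM
  }
}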

Build Optimization Strategies

1. Lazy Loading Implementation

// app-routing.module.ts
const routes: Routes = [
  {
    path: 'chat',
    loadChildren: () => import('./chat/chat.module').then(m => m.ChatModule)
  },
  {
    path: 'settings',
    loadChildren: () => import('./settings/settings.module').then(m => m.SettingsModule)
  }
];
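
If you use standalone components (Angular 14+), the same route-level code splitting is available without NgModules via loadComponent; a sketch assuming a standalone ChatComponent:

// Lazy-load a standalone component instead of a module
const routes: Routes = [
  {
    path: 'chat',
    loadComponent: () =>
      import('./chat/chat.component').then(c => c.ChatComponent)
  }
];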

2. Bundle Analysis

# Install bundle analyzer
npm install --save-dev webpack-bundle-analyzer

# Build with stats
ng build --configuration production --stats-json

# Analyze bundle
npx webpack-bundle-analyzer dist/your-app/stats.json

3. Service Worker for Caching

# Add service worker
ng add @angular/pwa

# This will add caching for your app shell and static assets
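
Once the schematic is in place, you can also prompt users when a new build is deployed. A minimal sketch using SwUpdate from @angular/service-worker (the confirm-and-reload policy is an assumption; adapt it to your UX):

// update.service.ts — reacts when a new service worker version is ready
import { Injectable } from '@angular/core';
import { SwUpdate, VersionReadyEvent } from '@angular/service-worker';
import { filter } from 'rxjs/operators';

@Injectable({ providedIn: 'root' })
export class UpdateService {
  constructor(updates: SwUpdate) {
    updates.versionUpdates
      .pipe(filter((e): e is VersionReadyEvent => e.type === 'VERSION_READY'))
      .subscribe(() => {
        // The new version is cached and ready; reload to activate it
        if (confirm('A new version is available. Reload?')) {
          document.location.reload();
        }
      });
  }
}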

4. Optimize Images and Assets

# Install image optimization tools
npm install --save-dev imagemin imagemin-webp imagemin-mozjpeg imagemin-pngquant

# Create optimization script in package.json
"scripts": {
"optimize-images": "imagemin src/assets/images/* --out-dir=dist/assets/images --plugin=webp --plugin=mozjpeg --plugin=pngquant"
}

Deployment Options

Option 1: Static Hosting (Netlify / Vercel)

Netlify Deployment

# Build for production
npm run build

# Install Netlify CLI
npm install -g netlify-cli

# Deploy
netlify deploy --prod --dir=dist/your-app

netlify.toml configuration:

[build]
publish = "dist/your-app"
command = "npm run build"

[[redirects]]
from = "/*"
to = "/index.html"
status = 200

[[headers]]
for = "/*"
[headers.values]
X-Frame-Options = "DENY"
X-XSS-Protection = "1; mode=block"
X-Content-Type-Options = "nosniff"
Referrer-Policy = "strict-origin-when-cross-origin"

Vercel Deployment

# Install Vercel CLI
npm install -g vercel

# Deploy
vercel --prod

vercel.json configuration:

{
  "version": 2,
  "builds": [
    {
      "src": "package.json",
      "use": "@vercel/static-build",
      "config": {
        "distDir": "dist/your-app"
      }
    }
  ],
  "routes": [
    {
      "src": "/(.*)",
      "dest": "/index.html"
    }
  ],
  "headers": [
    {
      "source": "/(.*)",
      "headers": [
        {
          "key": "X-Frame-Options",
          "value": "DENY"
        },
        {
          "key": "X-Content-Type-Options",
          "value": "nosniff"
        }
      ]
    }
  ]
}

Option 2: Docker Containerization

Dockerfile:

# Multi-stage build
FROM node:18-alpine AS build

WORKDIR /app
COPY package*.json ./
# Dev dependencies are required to run the Angular build itself
RUN npm ci

COPY . .
RUN npm run build

# Production stage
FROM nginx:alpine

# Copy built app
COPY --from=build /app/dist/your-app /usr/share/nginx/html

# Copy nginx configuration
COPY nginx.conf /etc/nginx/nginx.conf

EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]

nginx.conf:

events {
  worker_connections 1024;
}

http {
  include /etc/nginx/mime.types;
  default_type application/octet-stream;

  # Gzip compression
  gzip on;
  gzip_vary on;
  gzip_min_length 1024;
  gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;

  server {
    listen 80;
    server_name localhost;
    root /usr/share/nginx/html;
    index index.html;

    # Security headers
    add_header X-Frame-Options "SAMEORIGIN" always;
    add_header X-Content-Type-Options "nosniff" always;
    add_header X-XSS-Protection "1; mode=block" always;

    # Handle Angular routing
    location / {
      try_files $uri $uri/ /index.html;
    }

    # Cache static assets
    location ~* \.(js|css|png|jpg|jpeg|gif|ico|svg)$ {
      expires 1y;
      add_header Cache-Control "public, immutable";
    }
  }
}

Build and deploy:

# Build Docker image
docker build -t your-chat-app .

# Run locally
docker run -p 8080:80 your-chat-app

# Deploy to cloud (example with Google Cloud Run)
gcloud run deploy your-chat-app --image your-chat-app --platform managed

Option 3: AWS S3 + CloudFront

# Build for production
npm run build

# Install AWS CLI and configure
aws configure

# Sync to S3
aws s3 sync dist/your-app/ s3://your-bucket-name --delete

# Invalidate CloudFront cache
aws cloudfront create-invalidation --distribution-id YOUR_DISTRIBUTION_ID --paths "/*"

Monitoring & Analytics

1. Error Tracking with Sentry

npm install @sentry/angular @sentry/tracing

// app.module.ts
import * as Sentry from "@sentry/angular";
import { Integrations } from "@sentry/tracing";

Sentry.init({
  dsn: "YOUR_SENTRY_DSN",
  integrations: [
    new Integrations.BrowserTracing({
      tracingOrigins: ["localhost", "your-domain.com", /^\//],
      routingInstrumentation: Sentry.routingInstrumentation,
    }),
  ],
  tracesSampleRate: 1.0, // 1.0 traces every transaction; lower this under real traffic
});
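
To make sure exceptions caught by Angular's own ErrorHandler also reach Sentry, register the SDK's error handler in your providers (createErrorHandler ships with @sentry/angular; showDialog is optional):

// app.module.ts — route Angular's uncaught errors into Sentry
import { ErrorHandler, NgModule } from '@angular/core';
import * as Sentry from '@sentry/angular';

@NgModule({
  // ...
  providers: [
    { provide: ErrorHandler, useValue: Sentry.createErrorHandler({ showDialog: false }) },
  ],
})
export class AppModule {}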

2. Performance Monitoring

// performance.service.ts
declare const gtag: (...args: any[]) => void; // provided by the Google Analytics script tag

@Injectable()
export class PerformanceService {
  trackLLMResponse(provider: string, responseTime: number) {
    if (environment.enableAnalytics) {
      // Forward to your analytics service
      gtag('event', 'llm_response', {
        provider: provider,
        response_time: responseTime
      });
    }
  }
}
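
One way to feed trackLLMResponse is an HTTP interceptor that times each chat request; a sketch assuming the /api/chat proxy route shown earlier (remember to register it under HTTP_INTERCEPTORS):

// llm-timing.interceptor.ts — hedged sketch; the URL match is an assumption
import { Injectable } from '@angular/core';
import { HttpEvent, HttpHandler, HttpInterceptor, HttpRequest } from '@angular/common/http';
import { Observable } from 'rxjs';
import { finalize } from 'rxjs/operators';
import { PerformanceService } from './performance.service';

@Injectable()
export class LlmTimingInterceptor implements HttpInterceptor {
  constructor(private perf: PerformanceService) {}

  intercept(req: HttpRequest<unknown>, next: HttpHandler): Observable<HttpEvent<unknown>> {
    if (!req.url.includes('/api/chat')) {
      return next.handle(req); // only time LLM traffic
    }
    const started = performance.now();
    return next.handle(req).pipe(
      finalize(() => this.perf.trackLLMResponse('proxy', performance.now() - started))
    );
  }
}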

3. Health Check Endpoint

// health.service.ts
interface HealthStatus {
  status: 'healthy' | 'degraded';
  timestamp: string;
  version: string;
  llmProviders: Record<string, boolean>; // provider name -> reachable
}

@Injectable()
export class HealthService {
  async checkHealth(): Promise<HealthStatus> {
    return {
      status: 'healthy',
      timestamp: new Date().toISOString(),
      version: environment.version,
      llmProviders: await this.checkLLMProviders() // pings each configured provider
    };
  }
}

Performance Optimization

1. Implement OnPush Change Detection

import { ChangeDetectionStrategy, Component } from '@angular/core';

@Component({
  selector: 'app-message-list',
  changeDetection: ChangeDetectionStrategy.OnPush,
  // ...
})
export class MessageListComponent {
  // Component implementation
}
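
OnPush components re-render only when an @Input reference changes (or an event fires inside them), so update data immutably; Message here is a placeholder type:

// With OnPush, replace the array so the input reference changes
addMessage(msg: Message): void {
  this.messages = [...this.messages, msg]; // new reference -> view updates
  // this.messages.push(msg);              // mutation -> view does NOT update
}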

2. Virtual Scrolling for Large Message Lists

# Install the Angular CDK
npm install @angular/cdk

Import ScrollingModule (from @angular/cdk/scrolling) into your module, then render messages inside a viewport:

<cdk-virtual-scroll-viewport itemSize="50" class="message-viewport">
  <div *cdkVirtualFor="let message of messages">
    <app-message-bubble [message]="message"></app-message-bubble>
  </div>
</cdk-virtual-scroll-viewport>

3. Implement Proper Caching

@Injectable()
export class CacheService {
  private cache = new Map<string, { value: any; expiresAt: number }>();

  get(key: string): any {
    const entry = this.cache.get(key);
    if (!entry) return undefined;
    if (Date.now() > entry.expiresAt) {
      // Entry expired; evict it lazily on read
      this.cache.delete(key);
      return undefined;
    }
    return entry.value;
  }

  set(key: string, value: any, ttl: number = 300000): void {
    // Store an expiry timestamp instead of a timer, so overwriting a key
    // cannot be clobbered by a stale setTimeout from an earlier set()
    this.cache.set(key, { value, expiresAt: Date.now() + ttl });
  }
}
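
A hedged usage sketch tying this to the callLLMAPI method shown earlier: memoize identical prompts so repeated questions skip the network (keying on the raw prompt and a 5-minute TTL are assumptions):

// chat.service.ts excerpt
import { Observable, of } from 'rxjs';
import { tap } from 'rxjs/operators';

getReply(prompt: string): Observable<string> {
  const cached = this.cacheService.get(prompt);
  if (cached !== undefined) {
    return of(cached); // repeated prompt: answer from memory, no network call
  }
  return this.callLLMAPI(prompt).pipe(
    tap(reply => this.cacheService.set(prompt, reply, 300000)) // cache for 5 min
  );
}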

Common Production Issues & Solutions

Issue 1: CORS Errors with LLM APIs

Solution: Route LLM calls through a backend proxy (see the sketch under API Key Security) or use CORS-enabled endpoints

Issue 2: Large Bundle Size

Solution: Implement lazy loading and tree shaking

Issue 3: API Key Exposure

Solution: Move API calls to backend or use environment variables

Issue 4: Memory Leaks

Solution: Proper subscription management with the takeUntil pattern (see the sketch below)
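
A minimal sketch of the pattern (ChatService and messages$ are placeholder names):

// chat.component.ts — completes all open subscriptions on destroy
import { Component, OnDestroy, OnInit } from '@angular/core';
import { Subject } from 'rxjs';
import { takeUntil } from 'rxjs/operators';

@Component({ selector: 'app-chat', template: '...' })
export class ChatComponent implements OnInit, OnDestroy {
  private destroy$ = new Subject<void>();

  constructor(private chatService: ChatService) {}

  ngOnInit(): void {
    this.chatService.messages$
      .pipe(takeUntil(this.destroy$)) // auto-unsubscribes on destroy
      .subscribe(message => this.handleMessage(message));
  }

  ngOnDestroy(): void {
    // Emitting here completes every takeUntil-guarded subscription
    this.destroy$.next();
    this.destroy$.complete();
  }

  private handleMessage(message: unknown): void { /* ... */ }
}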

Issue 5: Slow Initial Load

Solution: Implement service worker and optimize critical rendering path

Production Deployment Checklist

Pre-Deployment

  • Run production build locally and test
  • Check bundle size and performance
  • Verify all environment variables are set
  • Test error scenarios and fallbacks
  • Validate security headers and CSP
  • Run accessibility audit
  • Test on multiple devices and browsers

Deployment

  • Deploy to staging environment first
  • Run smoke tests on staging
  • Monitor error rates and performance
  • Deploy to production
  • Verify deployment health
  • Monitor initial traffic and errors

Post-Deployment

  • Set up monitoring alerts
  • Configure backup and recovery procedures
  • Document rollback procedures
  • Set up automated health checks
  • Monitor user feedback and analytics

CI/CD Pipeline Example

GitHub Actions workflow (.github/workflows/deploy.yml):

name: Deploy to Production

on:
  push:
    branches: [main]

jobs:
  build-and-deploy:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v3

      - name: Setup Node.js
        uses: actions/setup-node@v3
        with:
          node-version: '18'
          cache: 'npm'

      - name: Install dependencies
        run: npm ci

      - name: Run tests
        run: npm run test:ci

      - name: Build for production
        run: npm run build
        env:
          NODE_ENV: production

      - name: Deploy to Netlify
        uses: nwtgck/actions-netlify@v2.0
        with:
          publish-dir: './dist/your-app'
          production-branch: main
          github-token: ${{ secrets.GITHUB_TOKEN }}
          deploy-message: "Deploy from GitHub Actions"
        env:
          NETLIFY_AUTH_TOKEN: ${{ secrets.NETLIFY_AUTH_TOKEN }}
          NETLIFY_SITE_ID: ${{ secrets.NETLIFY_SITE_ID }}

Conclusion

Building for production requires careful attention to security, performance, and reliability. The key considerations for your LLM-integrated chat application are:

  1. Security First: Protect API keys and implement proper authentication
  2. Performance: Optimize bundle size and implement caching
  3. Reliability: Handle errors gracefully and provide fallbacks
  4. Monitoring: Track performance and errors in real-time
  5. Scalability: Design for growth and increased usage

Choose the deployment option that best fits your needs, infrastructure, and budget. Start with static hosting for simplicity, then move to containerized solutions as your requirements grow.

Remember to test thoroughly in a staging environment that mirrors production before deploying to live users.