Development Setup

Get started with local development, including environment setup, dependencies, and development workflow.

Prerequisites

  • Node.js 18+ (LTS recommended)
  • pnpm 8+ (package manager)
  • Python 3.11+
  • Git
  • VS Code (recommended)
  • OpenAI API key

Quick Start

# Clone the repository
git clone https://github.com/your-org/VoiceAssist.git
# Install dependencies
pnpm install
# Start development servers
pnpm dev

Project Structure

VoiceAssist/
├── apps/
│   ├── web-app/       # React frontend
│   └── docs-site/     # Documentation site
├── backend/           # Python FastAPI backend
├── docs/              # Markdown documentation
├── packages/          # Shared packages
└── turbo.json         # Turborepo config

VoiceAssist Development Setup Guide

This guide walks you through setting up your local development environment for VoiceAssist.

Prerequisites

Required Software

  • Git 2.30+
  • Docker 24.0+ and Docker Compose 2.20+
  • Python 3.11+ (for backend development)
  • Node.js 18+ (for frontend development)
  • pnpm 8+ (package manager for frontend monorepo)
  • Visual Studio Code or PyCharm (IDE)
  • Postman or Insomnia for API testing
  • pgAdmin or DBeaver for database management

Backend Setup

1. Clone the Repository

git clone git@github.com:mohammednazmy/VoiceAssist.git
cd VoiceAssist

2. Set Up Python Virtual Environment

cd services/api-gateway
python3 -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

3. Install Backend Dependencies

pip install -r requirements.txt

4. Configure Environment Variables

# From repo root
cp .env.example .env

Edit .env and configure the following required variables:

# Core
ENVIRONMENT=development
DEBUG=true

# Database
POSTGRES_HOST=localhost
POSTGRES_PORT=5432
POSTGRES_USER=voiceassist
POSTGRES_PASSWORD=<your-password>
POSTGRES_DB=voiceassist_db
DATABASE_URL=postgresql://<user>:<password>@localhost:5432/voiceassist_db

# Redis
REDIS_HOST=localhost
REDIS_PORT=6379
REDIS_PASSWORD=<your-password>

# Qdrant (vector database)
QDRANT_HOST=localhost
QDRANT_PORT=6333

# Security
SECRET_KEY=<generate-with-openssl-rand-hex-32>
JWT_SECRET=<generate-with-openssl-rand-hex-32>
JWT_ALGORITHM=HS256
ACCESS_TOKEN_EXPIRE_MINUTES=30
REFRESH_TOKEN_EXPIRE_DAYS=7

# OpenAI
OPENAI_API_KEY=<your-openai-api-key>
OPENAI_MODEL=gpt-4-turbo-preview

# Nextcloud (optional for development)
NEXTCLOUD_URL=http://localhost:8080
NEXTCLOUD_ADMIN_USER=admin
NEXTCLOUD_ADMIN_PASSWORD=<your-password>
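
The SECRET_KEY and JWT_SECRET placeholders can be generated as the comment suggests:

openssl rand -hex 32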

5. Validate Environment

# From repo root
make check-env

This command validates that all required environment variables are set.
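
Internally the backend loads its configuration through Pydantic Settings (see the OpenAI verification section below). A minimal sketch of how required variables can be declared so that a missing value fails fast — illustrative only, not the project's actual settings module:

# Minimal illustration with pydantic-settings; field names mirror .env above,
# but the project's real settings module may be organized differently.
from pydantic_settings import BaseSettings, SettingsConfigDict


class Settings(BaseSettings):
    model_config = SettingsConfigDict(env_file=".env")

    ENVIRONMENT: str = "development"
    DATABASE_URL: str      # required: instantiation fails if unset
    SECRET_KEY: str        # required
    OPENAI_API_KEY: str    # required


settings = Settings()  # raises a ValidationError listing any missing variables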

6. Start Infrastructure Services

# From repo root
docker compose up -d postgres redis qdrant

7. Run Database Migrations

cd services/api-gateway
source venv/bin/activate
alembic upgrade head

Frontend Setup

1. Install pnpm

If you don't have pnpm installed:

npm install -g pnpm

Or see pnpm installation docs for alternative methods.

2. Install Frontend Dependencies

# From repo root
pnpm install

This installs dependencies for all packages in the monorepo (apps and shared packages).

3. Build Shared Packages

pnpm build

This builds all shared packages that other apps depend on.

4. Start Development Server

# Start all apps in dev mode
pnpm dev

# Or start specific apps
cd apps/web-app
pnpm dev

cd apps/admin-panel
pnpm dev

Frontend Project Structure

VoiceAssist/
├── apps/
│   ├── admin-panel/       # Admin dashboard (React + Vite)
│   └── web-app/           # Main web application (React + Vite)
├── packages/
│   ├── api-client/        # API client library
│   ├── config/            # Shared configuration
│   ├── design-tokens/     # Design system tokens
│   ├── types/             # TypeScript types
│   ├── ui/                # Shared UI components
│   └── utils/             # Shared utilities
└── pnpm-workspace.yaml    # pnpm workspace config

Development Tooling

Makefile Targets

The repo includes a Makefile with common development tasks:

# Environment
make check-env        # Validate environment variables
make check-openai     # Verify OpenAI API key is valid
make install          # Install all dependencies

# Development
make dev              # Start all services with Docker Compose
make stop             # Stop all Docker Compose services
make logs             # View Docker Compose logs

# Testing
make test             # Run all backend tests
make test-unit        # Run backend unit tests only
make test-frontend    # Run frontend tests

# Quality Checks
make lint             # Run Python and frontend linters
make type-check       # Run Python and TypeScript type checking
make bandit           # Run Bandit security scanner
make security         # Run all security scans
make pre-commit       # Run pre-commit hooks on all files

# Cleanup
make clean            # Remove build artifacts and caches

Pre-commit Hooks

We use pre-commit hooks to enforce code quality standards.

Install pre-commit

cd services/api-gateway
source venv/bin/activate
pip install pre-commit

Setup hooks

# From repo root
pre-commit install

This installs git hooks that run automatically before each commit.

Manual Run

# Run on all files
pre-commit run --all-files

# Run on staged files only
pre-commit run

What Pre-commit Checks

  • Black - Python code formatting
  • isort - Python import sorting
  • flake8 - Python linting
  • mypy - Python type checking (optional)
  • Bandit - Python security scanning
  • Prettier - Frontend code formatting
  • ESLint - TypeScript/JavaScript linting
  • shellcheck - Shell script linting
  • hadolint - Dockerfile linting

Running the Application

# Start all services (backend + frontend + infrastructure)
docker compose up -d

# View logs
docker compose logs -f api-gateway

Access the application: by default the backend API is available at http://localhost:8000 and the web app at http://localhost:5173.

Backend Only

cd services/api-gateway
source venv/bin/activate
uvicorn app.main:app --reload --host 0.0.0.0 --port 8000

Frontend Only

# Start specific frontend app
cd apps/web-app
pnpm dev

# Or
cd apps/admin-panel
pnpm dev

Verifying OpenAI API Key Integration

The application requires a valid OpenAI API key for LLM features (chat, RAG, voice mode).

Quick Verification

# From repo root (uses venv and settings automatically)
make check-openai

This runs a verification script that:

  1. Loads the OPENAI_API_KEY from .env via Pydantic Settings
  2. Validates the key format (should start with sk-)
  3. Makes a live API call to OpenAI (lists available models)
  4. Reports success or failure with actionable error messages
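
As a rough sketch of what those steps amount to with the official openai Python client — illustrative only; the real logic lives in scripts/check_openai_key.py:

# Illustrative sketch only; not the project's actual script.
import os
import sys

from openai import OpenAI

api_key = os.environ.get("OPENAI_API_KEY", "")
if not api_key.startswith("sk-"):
    sys.exit("OPENAI_API_KEY is missing or malformed (expected it to start with sk-)")

client = OpenAI(api_key=api_key)
models = client.models.list()  # live call; raises on auth or network errors
print(f"Key accepted; {len(models.data)} models accessible")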

Manual Script Usage

# From repo root
cd services/api-gateway
source venv/bin/activate
python ../../scripts/check_openai_key.py

# With verbose output
python ../../scripts/check_openai_key.py --verbose

# Skip live API test (only validate config)
python ../../scripts/check_openai_key.py --skip-api-test

Health Check Endpoint

When the backend is running, you can also verify OpenAI connectivity via:

curl http://localhost:8000/health/openai

Returns:

  • 200 OK - Key is valid and API is accessible
  • 503 Service Unavailable - Key not configured or API not accessible

Example response:

{ "status": "ok", "configured": true, "accessible": true, "latency_ms": 245.67, "models_accessible": 108, "timestamp": 1732476123.456 }

Live Integration Tests

For deeper validation, run the live OpenAI integration tests:

cd services/api-gateway
source venv/bin/activate
export PYTHONPATH=.
export LIVE_OPENAI_TESTS=1
pytest tests/integration/test_openai_config.py -v

These tests verify:

  • API key can list models
  • LLM client can generate completions
  • Realtime Voice service is properly configured

Note: Live tests are skipped by default to avoid API costs. Enable with LIVE_OPENAI_TESTS=1.

Troubleshooting

If verification fails:

  1. Key not configured: Check .env file has OPENAI_API_KEY=sk-...
  2. Invalid format: Ensure key starts with sk- and is 40+ characters
  3. API rejected: Verify key at https://platform.openai.com/api-keys
  4. Rate limited: Check your OpenAI account usage and billing
  5. Network error: Verify server can reach api.openai.com

Testing

Backend Tests

# From repo root
make test         # All tests
make test-unit    # Unit tests only

# Or directly with pytest
cd services/api-gateway
source venv/bin/activate
pytest                    # All tests
pytest tests/unit/        # Unit tests
pytest tests/e2e/         # E2E tests
pytest -v -k "test_auth"  # Specific tests

Frontend Tests

# From repo root
pnpm test

# Or in specific package
cd apps/web-app
pnpm test

Test Coverage

# Backend coverage
cd services/api-gateway
source venv/bin/activate
pytest --cov=app --cov-report=html

# View coverage report
open htmlcov/index.html

Linting and Type Checking

Backend

# From repo root
make lint          # Flake8 + Black + isort
make type-check    # mypy type checking

# Or manually
cd services/api-gateway
source venv/bin/activate
black app/
isort app/
flake8 app/
mypy app/

Frontend

# From repo root
pnpm lint          # ESLint
pnpm type-check    # TypeScript compiler

# Or in specific package
cd apps/web-app
pnpm lint
pnpm type-check

Security Scanning

Bandit (Python Security)

# From repo root
make bandit

# Or directly
cd services/api-gateway
source venv/bin/activate
bandit -c ../../.bandit -r app/

Safety (Python Dependencies)

cd services/api-gateway
source venv/bin/activate
pip install safety
safety check

npm audit (Frontend Dependencies)

pnpm audit

CI/CD

GitHub Actions Workflows

The project uses GitHub Actions for CI/CD:

  • Backend CI (.github/workflows/ci.yml)

    • Runs on changes to services/, tests/, backend configs
    • Linting, type checking, unit tests, E2E tests
    • Security scanning with Bandit
  • Frontend CI (.github/workflows/frontend-ci.yml)

    • Runs on changes to apps/, packages/, frontend configs
    • Linting, type checking, tests
    • Build verification
  • Security Scan (.github/workflows/security-scan.yml)

    • Scheduled security scans
    • Dependency vulnerability checks

Required Checks Before PR

Before opening a pull request, ensure:

# 1. Environment is valid
make check-env

# 2. All tests pass
make test
pnpm test

# 3. Linting passes
make lint
pnpm lint

# 4. Type checking passes
make type-check
pnpm type-check

# 5. Pre-commit hooks pass
pre-commit run --all-files

# 6. Security scans pass
make bandit

Troubleshooting

Backend Issues

Problem: ModuleNotFoundError: No module named 'app'

Solution: Ensure you're in the correct directory and virtual environment:

cd services/api-gateway
source venv/bin/activate
export PYTHONPATH=/path/to/VoiceAssist/services/api-gateway:$PYTHONPATH

Problem: Database connection errors

Solution: Ensure infrastructure services are running:

docker compose ps
docker compose up -d postgres redis qdrant

Frontend Issues

Problem: pnpm: command not found

Solution: Install pnpm globally:

npm install -g pnpm

Problem: Build errors with shared packages

Solution: Build packages in dependency order:

pnpm build

Problem: Port already in use

Solution: Change port in vite.config.ts or kill the process:

lsof -ti:5173 | xargs kill -9

Pre-commit Issues

Problem: Pre-commit hooks failing

Solution: Update hooks and run manually:

pre-commit autoupdate
pre-commit run --all-files


Contributing Guidelines

Contributing to VoiceAssist

Thank you for your interest in contributing to VoiceAssist! This document provides guidelines and best practices for contributing to the project.

Code of Conduct

This project adheres to professional standards of conduct. We expect all contributors to:

  • Be respectful and inclusive
  • Focus on constructive feedback
  • Prioritize the best interests of the project and its users
  • Maintain confidentiality regarding security issues

Getting Started

1. Fork and Clone

# Fork the repository on GitHub, then clone your fork
git clone git@github.com:YOUR_USERNAME/VoiceAssist.git
cd VoiceAssist

# Add upstream remote
git remote add upstream git@github.com:mohammednazmy/VoiceAssist.git

2. Set Up Development Environment

Follow the complete setup instructions in docs/LOCAL_DEVELOPMENT.md.

Quick setup:

# Backend
cd services/api-gateway
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt

# Frontend
cd ../..
pnpm install

# Validate environment
make check-env

3. Keep Your Fork Updated

git fetch upstream
git checkout main
git merge upstream/main
git push origin main

Development Workflow

Branching Strategy

We use a feature branch workflow:

# Create a feature branch from main
git checkout main
git pull upstream main
git checkout -b feature/your-feature-name

# Or for bug fixes
git checkout -b fix/bug-description

Branch naming conventions:

  • feature/ - New features
  • fix/ - Bug fixes
  • docs/ - Documentation updates
  • refactor/ - Code refactoring
  • test/ - Test improvements
  • chore/ - Build, tooling, dependencies

Making Changes

  1. Keep changes focused - One feature or fix per branch

  2. Write clear commit messages:

    feat: add user profile page
    
    - Created ProfilePage component
    - Added profile update API endpoint
    - Added unit tests for profile service
    
    Closes #123
    
  3. Commit message format:

    • feat: - New feature
    • fix: - Bug fix
    • docs: - Documentation only
    • style: - Formatting, missing semicolons, etc.
    • refactor: - Code refactoring
    • test: - Adding tests
    • chore: - Build, dependencies, tooling
  4. Commit early and often - Small, logical commits are easier to review

Before Opening a PR

Run these checks locally to ensure CI will pass:

# 1. Validate environment
make check-env

# 2. Run all tests
make test     # Backend tests (pytest)
pnpm test     # Frontend tests (runs Vitest in non-watch mode)

# 3. Run linters
make lint     # Backend (flake8, black, isort)
pnpm lint     # Frontend (ESLint)

# 4. Run type checking
make type-check    # Backend (mypy)
pnpm type-check    # Frontend (TypeScript)

# 5. Run pre-commit hooks
pre-commit run --all-files

# 6. Run security scans
make bandit

Note on frontend tests:

  • pnpm test - Runs tests once and exits (used in CI)
  • pnpm test:watch - Runs tests in interactive watch mode (for local development)
  • Tests use jsdom 24.1.3 (downgraded from 27.2.0 to fix initialization hang)
  • Some tests have known failures - see KNOWN_ISSUES.md for details

Code Style

Python (Backend)

We follow PEP 8 with these tools enforcing style:

  • Black (code formatter)
  • isort (import sorting)
  • flake8 (linting)
  • mypy (type checking, optional but recommended)

Configuration is in:

  • .flake8
  • pyproject.toml
  • .isort.cfg

Type Hints: Use type hints for all function signatures:

from typing import Optional, List, Dict, Any


def get_user(user_id: str) -> Optional[User]:
    """Get user by ID."""
    pass


async def list_users(
    skip: int = 0,
    limit: int = 100
) -> List[User]:
    """List users with pagination."""
    pass

Docstrings: Use Google-style docstrings:

def process_query(
    query: str,
    context: Optional[Dict[str, Any]] = None
) -> QueryResult:
    """Process a user query and return results.

    Args:
        query: The user's query string
        context: Optional context dictionary for the query

    Returns:
        QueryResult containing the processed response

    Raises:
        ValidationError: If query is invalid
        ProcessingError: If query processing fails
    """
    pass

TypeScript/JavaScript (Frontend)

We use:

  • Prettier (code formatting)
  • ESLint (linting)
  • TypeScript (type safety)

Configuration is in:

  • .prettierrc
  • .eslintrc.js
  • tsconfig.json

Type Safety: Always use TypeScript, avoid any:

// Good
interface User {
  id: string;
  email: string;
  role: "user" | "admin";
}

function getUser(id: string): Promise<User> {
  return apiClient.get<User>(`/users/${id}`);
}

// Avoid
function getUser(id: any): Promise<any> {
  return apiClient.get(`/users/${id}`);
}

Component Structure: Use functional components with hooks:

interface ProfilePageProps {
  userId: string;
}

export function ProfilePage({ userId }: ProfilePageProps) {
  const [user, setUser] = useState<User | null>(null);
  const [loading, setLoading] = useState(true);

  useEffect(() => {
    loadUser();
  }, [userId]);

  const loadUser = async () => {
    // Implementation
  };

  return (
    <div>
      {/* JSX */}
    </div>
  );
}

Editor Configuration

We use .editorconfig to maintain consistent formatting across editors:

[*]
end_of_line = lf
insert_final_newline = true
charset = utf-8
indent_style = space
indent_size = 2

[*.py]
indent_size = 4

[Makefile]
indent_style = tab

Ensure your editor respects .editorconfig.


Testing Requirements

Backend Tests

Tests are required for:

  • All new API endpoints
  • Business logic functions
  • Database operations
  • Authentication/authorization flows

Test structure:

services/api-gateway/tests/
├── unit/              # Unit tests (fast, isolated)
│   ├── test_auth.py
│   ├── test_api_envelope.py
│   └── test_rag_service.py
├── e2e/               # End-to-end tests (slower, integrated)
│   ├── test_auth_flow.py
│   └── test_query_flow.py
└── conftest.py        # Shared fixtures
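
As an illustration of the kind of shared fixture conftest.py typically provides (hypothetical; the project's real fixtures may differ), a test client bound to the FastAPI app might be defined like this:

# Hypothetical shared fixture; see the project's conftest.py for the real ones.
import pytest
from fastapi.testclient import TestClient

from app.main import app


@pytest.fixture
def client() -> TestClient:
    """HTTP client bound to the FastAPI app, reused across tests."""
    return TestClient(app)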

Example test:

import pytest
from app.core.security import create_access_token, verify_token


def test_create_and_verify_access_token():
    """Test JWT token creation and verification."""
    payload = {"sub": "user@example.com", "role": "user"}
    token = create_access_token(payload)

    assert token is not None
    assert isinstance(token, str)

    decoded = verify_token(token)
    assert decoded["sub"] == "user@example.com"
    assert decoded["role"] == "user"
    assert decoded["type"] == "access"

Frontend Tests

Use Vitest for unit tests and React Testing Library for component tests:

import { render, screen, fireEvent, waitFor } from '@testing-library/react';
import { LoginPage } from './LoginPage';
import { describe, it, expect, vi } from 'vitest';

describe('LoginPage', () => {
  it('should submit login form with valid credentials', async () => {
    const mockLogin = vi.fn();
    render(<LoginPage onLogin={mockLogin} />);

    fireEvent.change(screen.getByLabelText(/email/i), {
      target: { value: 'user@example.com' }
    });
    fireEvent.change(screen.getByLabelText(/password/i), {
      target: { value: 'password123' }
    });
    fireEvent.click(screen.getByRole('button', { name: /sign in/i }));

    await waitFor(() => {
      expect(mockLogin).toHaveBeenCalledWith({
        email: 'user@example.com',
        password: 'password123'
      });
    });
  });
});

Test Coverage

Aim for:

  • Backend: 80%+ overall, 90%+ for critical paths (auth, data handling)
  • Frontend: 70%+ for shared packages, 60%+ for apps

Check coverage:

# Backend
cd services/api-gateway
pytest --cov=app --cov-report=html
open htmlcov/index.html

# Frontend
pnpm test:coverage

Pull Request Process

1. Prepare Your PR

  • All tests pass locally
  • All linters pass
  • Type checking passes
  • Pre-commit hooks pass
  • New tests added for new features
  • Documentation updated (if needed)
  • Changelog updated (for user-facing changes)

2. Open the Pull Request

git push origin feature/your-feature-name

Then open a PR on GitHub with:

Title: Clear, concise description (e.g., "feat: add user profile management")

Description template:

## Description
Brief description of what this PR does.

## Changes
- List of specific changes
- Bullet points for clarity

## Testing
- [ ] Unit tests added/updated
- [ ] E2E tests added/updated (if applicable)
- [ ] Manual testing performed

## Screenshots (if UI changes)
![Screenshot](url)

## Related Issues
Closes #123
Relates to #456

## Checklist
- [ ] Tests pass
- [ ] Linters pass
- [ ] Documentation updated
- [ ] CHANGELOG updated (if user-facing)

3. Code Review

  • Address all review comments
  • Keep discussions focused and professional
  • Make requested changes in new commits (don't force-push during review)
  • Mark conversations as resolved once addressed

4. Merge

Once approved and all CI checks pass:

  • We typically use squash merging to keep main branch history clean
  • Ensure the squash commit message is clear and follows conventions
  • Delete your branch after merging

Security and Privacy

Secrets Management

NEVER commit:

  • API keys
  • Passwords
  • Private keys
  • SSL certificates
  • .env files with real values

Use .env.example as a template with placeholder values.

Check before commit:

git diff --staged # Review what you're committing

HIPAA and PHI

This is a healthcare application. Be extremely careful with Protected Health Information (PHI):

  • Never log PHI - Don't include patient data in logs
  • Never commit test data with PHI - Use synthetic/fake data only
  • Sanitize examples - Redact any real data in documentation or bug reports
  • Encrypt sensitive data - Follow encryption standards in codebase
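
As an illustration of the logging rule, prefer opaque identifiers over patient details (hypothetical snippet, not project code):

# Illustrative only: log opaque identifiers, never patient details.
import logging

logger = logging.getLogger(__name__)


def handle_query(user_id: str, query: str) -> None:
    logger.info("Processing query", extra={"user_id": user_id})  # OK: opaque ID only
    # logger.info("Query from %s: %s", patient_name, query)      # Never: PHI in logs
    ...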

Security Issues

DO NOT open public GitHub issues for security vulnerabilities.

Instead:

  1. Email security concerns to: [INSERT EMAIL]
  2. Include detailed description and reproduction steps
  3. We'll respond within 48 hours

Documentation

When to Update Docs

Update documentation when you:

  • Add a new feature
  • Change API contracts
  • Modify configuration requirements
  • Update deployment procedures
  • Change development setup

Documentation Locations

  • API docs: Auto-generated from code (/docs endpoint)
  • Architecture: docs/ARCHITECTURE_V2.md
  • Setup: docs/DEVELOPMENT_SETUP.md
  • Deployment: docs/DEPLOYMENT_GUIDE.md
  • Client implementation: docs/client-implementation/
  • README: README.md (high-level overview)

Inline Documentation

  • Python: Use Google-style docstrings
  • TypeScript: Use JSDoc comments
def calculate_similarity(
    query_embedding: List[float],
    document_embedding: List[float]
) -> float:
    """Calculate cosine similarity between two embeddings.

    Args:
        query_embedding: Query vector embedding
        document_embedding: Document vector embedding

    Returns:
        Similarity score between 0 and 1

    Example:
        >>> query_emb = [0.1, 0.2, 0.3]
        >>> doc_emb = [0.2, 0.3, 0.4]
        >>> similarity = calculate_similarity(query_emb, doc_emb)
        >>> print(f"Similarity: {similarity:.2f}")
    """
/**
 * Fetch user profile data
 *
 * @param userId - The unique user identifier
 * @returns Promise resolving to User object
 * @throws {NotFoundError} If user doesn't exist
 *
 * @example
 * ```typescript
 * const user = await fetchUser('user-123');
 * console.log(user.email);
 * ```
 */
async function fetchUser(userId: string): Promise<User> {
  // Implementation
}

Questions or Issues?

  • General questions: Open a GitHub Discussion
  • Bug reports: Open a GitHub Issue (use bug template)
  • Feature requests: Open a GitHub Issue (use feature template)
  • Security issues: Email security team (DO NOT open public issue)

Thank you for contributing to VoiceAssist! 🎉
