Development Guide

This is the full development guide for PR Tracker, covering project structure, conventions, workflow, and common tasks.


What Is This Project?

PR Tracker is an agent-first CRM for public relations work. It has no GUI -- all interaction happens through MCP (Model Context Protocol) tools called by AI agents (Claude Cowork, Claude Code, OpenClaw, etc.).

The system tracks: outlets (media companies, podcasts, publishers), channels (website, newsletter, podcast, TV, print, event, award), contacts (journalists, editors, producers), campaigns, opportunities (the pipeline/deal object), coverage (results), and media (article text, video, audio clips).

See the Product Specification for the full data model and tool inventory.

Architecture

Agents → Cloudflare Worker (MCP Server, TypeScript) → Supabase (Postgres + Auth + Storage)
                                │
                                └→ Slack (notifications)
  • Supabase: Database, Google OAuth, media file storage
  • Cloudflare Worker: MCP server -- handles agent tool calls over SSE
  • Slack Webhooks: Notifications when opportunities change stage
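The stage-change notification can be as small as one payload builder plus one fetch from the Worker. A minimal sketch -- the `StageChange` field names are illustrative, not taken from the actual codebase:

```typescript
// Sketch: build the Slack incoming-webhook payload for an opportunity
// stage change. Field names here are hypothetical.
interface StageChange {
  opportunityName: string;
  fromStage: string;
  toStage: string;
  agentId: string;
}

function buildSlackPayload(change: StageChange): { text: string } {
  return {
    text: `*${change.opportunityName}* moved from \`${change.fromStage}\` to \`${change.toStage}\` (by ${change.agentId})`,
  };
}

// Posting from the Worker is then a single fetch:
// await fetch(env.SLACK_WEBHOOK_URL, {
//   method: "POST",
//   headers: { "Content-Type": "application/json" },
//   body: JSON.stringify(buildSlackPayload(change)),
// });
```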

Key Architectural Decisions

  1. Agent-first, no GUI: MCP tools are the only interface. Every operation must be available as a tool.
  2. Non-destructive by design: No hard deletes. All records use soft delete (archived_at). Every mutation is logged via Postgres triggers to change_log with full before/after state. Rollback is supported per-change or per-agent-session.
  3. Agent identity tracking: Every DB mutation records user_id, agent_id, and agent_session for auditability and safe rollback.
  4. Single company: No multi-tenancy. RLS is simple -- all authenticated users see everything.
  5. Email capture via shared inbox: Team members CC/BCC a system email address. A cron job syncs the inbox and matches messages to known contacts only. No individual inbox access required.
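The "known contacts only" rule from decision 5 reduces to a filter over message participants. A self-contained sketch (the real sync reads the Gmail API and the contact table; here the known addresses are an in-memory set):

```typescript
// Return the message addresses that belong to known contacts.
// A message is stored only if this list is non-empty.
function matchKnownParticipants(
  messageAddresses: string[],
  knownContactEmails: Set<string>,
): string[] {
  return messageAddresses
    .map((a) => a.trim().toLowerCase()) // normalize so Jane@Outlet.com matches
    .filter((a) => knownContactEmails.has(a));
}
```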

Project Structure

pr-tracker/
├── CLAUDE.md              # Development guide (this content)
├── SPEC.md                # Full product specification
├── supabase/
│   ├── config.toml        # Supabase local config
│   ├── migrations/        # SQL migrations (sequential, timestamped)
│   │   ├── 00001_initial_schema.sql
│   │   └── ...
│   └── seed.sql           # Seed data (General campaign, etc.)
├── mcp-server/
│   ├── package.json
│   ├── tsconfig.json
│   ├── wrangler.toml      # Cloudflare Worker config
│   └── src/
│       ├── index.ts        # Worker entry point, MCP server setup
│       ├── auth.ts         # API key validation, session context
│       ├── tools/          # One file per tool group
│       │   ├── outlets.ts
│       │   ├── contacts.ts
│       │   ├── channels.ts
│       │   ├── campaigns.ts
│       │   ├── opportunities.ts
│       │   ├── coverage.ts
│       │   ├── media.ts
│       │   ├── search.ts
│       │   ├── emails.ts
│       │   ├── notes.ts
│       │   ├── audit.ts
│       │   └── notifications.ts
│       ├── email-sync.ts   # Gmail API cron sync logic
│       ├── db.ts           # Supabase client setup
│       └── types.ts        # Shared TypeScript types
├── scripts/
│   ├── create-agent-key.ts # CLI script to generate agent API keys
│   └── setup.ts            # Initial setup script
└── .env.local              # Local env vars (not committed)

Development Workflow

Prerequisites

bash
npm install -g supabase wrangler

Local Development

bash
# Start local Supabase
supabase start

# Run migrations
supabase db reset  # applies all migrations + seed data

# Start MCP server locally
cd mcp-server
npm install
npm run dev  # wrangler dev

Database Migrations

All schema changes go through Supabase migrations:

bash
# Create a new migration
supabase migration new <description>

# Edit the generated SQL file in supabase/migrations/

# Apply locally
supabase db reset

# Push to production
supabase db push

Migration conventions:

  • One migration per logical change
  • Migrations are forward-only (no down migrations -- use new migrations to revert)
  • Always include change_log triggers for new tables
  • Never drop tables -- archive/deprecate instead
  • Test migrations locally before pushing to production
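Because migrations are forward-only, "reverting" means writing a new migration that undoes the effect without destroying history. A hypothetical sketch (table and column names are illustrative):

```sql
-- 00008: revert the column added in 00007 without a down migration.
-- Never DROP -- rename and deprecate so change_log history stays readable.
ALTER TABLE outlet RENAME COLUMN fax_number TO deprecated_fax_number;
COMMENT ON COLUMN outlet.deprecated_fax_number IS
  'Deprecated by 00008; kept for change_log history. Do not write.';
```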

Deploying the MCP Server

bash
cd mcp-server
wrangler deploy

Environment Variables

Set Cloudflare Worker secrets:

bash
wrangler secret put SUPABASE_URL
wrangler secret put SUPABASE_SERVICE_ROLE_KEY
wrangler secret put SLACK_WEBHOOK_URL

Coding Conventions

TypeScript (MCP Server)

  • Use the @modelcontextprotocol/sdk TypeScript SDK
  • Use @supabase/supabase-js for database access
  • One file per tool group in src/tools/
  • Each tool handler: validate input with Zod, set session context (app.user_id, app.agent_id), call Supabase, return structured JSON
  • Tool names use snake_case (e.g., create_outlet, list_opportunities_by_stage)
  • Return meaningful error messages -- agents need to understand what went wrong

SQL (Migrations)

  • Table names are singular snake_case (e.g., outlet, contact_outlet)
  • Use UUIDs for all primary keys (gen_random_uuid())
  • All tables with user data get: archived_at, created_at, updated_at
  • All tables get the log_change() trigger
  • Use enums for fixed sets (channel_type, opportunity_stage)
  • Use JSONB for flexible/extensible attributes (channel metrics, social links)
  • Index foreign keys and commonly filtered columns
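Putting those conventions together, a new table looks roughly like this (the `press_kit` table is hypothetical, used only to show the pattern):

```sql
CREATE TABLE press_kit (
  id          uuid PRIMARY KEY DEFAULT gen_random_uuid(),
  outlet_id   uuid NOT NULL REFERENCES outlet(id),
  title       text NOT NULL,
  attributes  jsonb NOT NULL DEFAULT '{}',  -- flexible/extensible fields
  archived_at timestamptz,                  -- soft delete
  created_at  timestamptz NOT NULL DEFAULT now(),
  updated_at  timestamptz NOT NULL DEFAULT now()
);

CREATE INDEX press_kit_outlet_id_idx ON press_kit (outlet_id);

-- Every table gets the audit trigger:
CREATE TRIGGER press_kit_log_change
  AFTER INSERT OR UPDATE OR DELETE ON press_kit
  FOR EACH ROW EXECUTE FUNCTION log_change();
```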

Soft Deletes

  • Archive: UPDATE table SET archived_at = now() WHERE id = $1
  • Restore: UPDATE table SET archived_at = NULL WHERE id = $1
  • Default queries exclude archived: WHERE archived_at IS NULL
  • Explicit include_archived parameter on list/search tools
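The archive/restore/list semantics above, demonstrated on an in-memory record set (the real queries run against Postgres via supabase-js):

```typescript
interface Row { id: string; archived_at: string | null }

// Archive: set archived_at = now() for the matching row.
const archive = (rows: Row[], id: string): Row[] =>
  rows.map((r) => (r.id === id ? { ...r, archived_at: new Date().toISOString() } : r));

// Restore: clear archived_at.
const restore = (rows: Row[], id: string): Row[] =>
  rows.map((r) => (r.id === id ? { ...r, archived_at: null } : r));

// Default listing excludes archived rows unless include_archived is set.
const list = (rows: Row[], includeArchived = false): Row[] =>
  includeArchived ? rows : rows.filter((r) => r.archived_at === null);
```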

Change Log

Every table mutation is automatically logged by the log_change() Postgres trigger. The MCP server sets session variables before each operation:

sql
SET LOCAL app.user_id = '<user-uuid>';
SET LOCAL app.agent_id = '<agent-identifier>';
SET LOCAL app.agent_session = '<session-id>';
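Inside the trigger, those variables are read back with `current_setting(name, true)` (the second argument makes a missing setting return NULL instead of erroring). A simplified sketch of what `log_change()` might look like -- the real function's column list and before/after handling may differ:

```sql
CREATE OR REPLACE FUNCTION log_change() RETURNS trigger AS $$
BEGIN
  INSERT INTO change_log (table_name, record_id, operation,
                          old_data, new_data,
                          user_id, agent_id, agent_session)
  VALUES (TG_TABLE_NAME,
          COALESCE(NEW.id, OLD.id),       -- NEW is NULL on DELETE
          TG_OP,
          to_jsonb(OLD),                   -- before state
          to_jsonb(NEW),                   -- after state
          current_setting('app.user_id', true)::uuid,
          current_setting('app.agent_id', true),
          current_setting('app.agent_session', true));
  RETURN COALESCE(NEW, OLD);
END;
$$ LANGUAGE plpgsql;
```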

Common Tasks

Adding a New Table

  1. Create migration: supabase migration new add_<table_name>
  2. Write CREATE TABLE with standard columns (id, archived_at, created_at, updated_at)
  3. Add log_change() trigger
  4. Add indexes on foreign keys
  5. Add tool handlers in mcp-server/src/tools/
  6. Add types in mcp-server/src/types.ts

Adding a New MCP Tool

  1. Add handler function in the appropriate src/tools/*.ts file
  2. Define Zod input schema
  3. Register tool in src/index.ts
  4. Test locally with wrangler dev

Schema Changes

  1. Create migration: supabase migration new <description>
  2. Write ALTER TABLE statements
  3. If adding columns with defaults, consider backfilling
  4. Test: supabase db reset
  5. Deploy: supabase db push
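Step 3 in migration form, with a hypothetical column and backfill condition (the real values depend on the change; note the `UPDATE` itself will be recorded by the `log_change()` trigger):

```sql
-- Add a column with a default for new rows...
ALTER TABLE opportunity ADD COLUMN priority text NOT NULL DEFAULT 'normal';

-- ...then backfill existing rows where the default is not the right value.
UPDATE opportunity
SET priority = 'high'
WHERE stage = 'confirmed' AND archived_at IS NULL;
```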

Checking Change Log

sql
-- Recent changes by a specific agent
SELECT * FROM change_log
WHERE agent_id = 'claude-cowork'
ORDER BY created_at DESC
LIMIT 20;

-- All changes to a specific record
SELECT * FROM change_log
WHERE table_name = 'opportunity' AND record_id = '<uuid>'
ORDER BY created_at;

-- Bulk changes in a session (for potential rollback)
SELECT * FROM change_log
WHERE agent_session = '<session-id>'
ORDER BY created_at;

Important Reminders

  • Keep SPEC.md current -- when adding or changing tables, tools, enums, or features, update SPEC.md in the same commit
  • Never hard-delete data -- always use soft delete (archive)
  • Always include change_log triggers on new tables
  • Set session context (user_id, agent_id, agent_session) before every DB operation in the MCP server
  • Test migrations locally with supabase db reset before pushing to production
  • Channel attributes are JSONB -- don't over-normalize channel-specific fields
  • The General campaign always exists and is the default for new opportunities
  • Media files go to Supabase Storage bucket pr-media, metadata goes to the media table
  • Email sync only stores emails involving known contacts -- unknown addresses are ignored
  • Email dedup uses Gmail message ID and thread ID as unique constraints
  • Gmail credentials (refresh token) are stored as Cloudflare Worker secrets, never in code