# Development Guide

This is the full development guide for PR Tracker, covering project structure, conventions, workflow, and common tasks.
## What Is This Project?

PR Tracker is an agent-first CRM for public relations work. It has no GUI -- all interaction happens through MCP (Model Context Protocol) tools called by AI agents (Claude Cowork, Claude Code, OpenClaw, etc.).
The system tracks: outlets (media companies, podcasts, publishers), channels (website, newsletter, podcast, TV, print, event, award), contacts (journalists, editors, producers), campaigns, opportunities (the pipeline/deal object), coverage (results), and media (article text, video, audio clips).
See the Product Specification for the full data model and tool inventory.
## Architecture

```
Agents → Cloudflare Worker (MCP Server, TypeScript) → Supabase (Postgres + Auth + Storage)
                              ↓
                    Slack (notifications)
```

- Supabase: Database, Google OAuth, media file storage
- Cloudflare Worker: MCP server -- handles agent tool calls over SSE
- Slack Webhooks: Notifications when opportunities change stage
### Key Architectural Decisions

- Agent-first, no GUI: MCP tools are the only interface. Every operation must be available as a tool.
- Non-destructive by design: No hard deletes. All records use soft delete (`archived_at`). Every mutation is logged via Postgres triggers to `change_log` with full before/after state. Rollback is supported per-change or per-agent-session.
- Agent identity tracking: Every DB mutation records `user_id`, `agent_id`, and `agent_session` for auditability and safe rollback.
- Single company: No multi-tenancy. RLS is simple -- all authenticated users see everything.
- Email capture via shared inbox: Team members CC/BCC a system email address. A cron job syncs the inbox and matches messages to known contacts only. No individual inbox access required.
## Project Structure

```
pr-tracker/
├── CLAUDE.md                 # Development guide (this content)
├── SPEC.md                   # Full product specification
├── supabase/
│   ├── config.toml           # Supabase local config
│   ├── migrations/           # SQL migrations (sequential, timestamped)
│   │   ├── 00001_initial_schema.sql
│   │   └── ...
│   └── seed.sql              # Seed data (General campaign, etc.)
├── mcp-server/
│   ├── package.json
│   ├── tsconfig.json
│   ├── wrangler.toml         # Cloudflare Worker config
│   └── src/
│       ├── index.ts          # Worker entry point, MCP server setup
│       ├── auth.ts           # API key validation, session context
│       ├── tools/            # One file per tool group
│       │   ├── outlets.ts
│       │   ├── contacts.ts
│       │   ├── channels.ts
│       │   ├── campaigns.ts
│       │   ├── opportunities.ts
│       │   ├── coverage.ts
│       │   ├── media.ts
│       │   ├── search.ts
│       │   ├── emails.ts
│       │   ├── notes.ts
│       │   ├── audit.ts
│       │   └── notifications.ts
│       ├── email-sync.ts     # Gmail API cron sync logic
│       ├── db.ts             # Supabase client setup
│       └── types.ts          # Shared TypeScript types
├── scripts/
│   ├── create-agent-key.ts   # CLI script to generate agent API keys
│   └── setup.ts              # Initial setup script
└── .env.local                # Local env vars (not committed)
```

## Development Workflow
### Prerequisites

```bash
npm install -g supabase wrangler
```

### Local Development
```bash
# Start local Supabase
supabase start

# Run migrations
supabase db reset   # applies all migrations + seed data

# Start MCP server locally
cd mcp-server
npm install
npm run dev         # wrangler dev
```

### Database Migrations
All schema changes go through Supabase migrations:

```bash
# Create a new migration
supabase migration new <description>

# Edit the generated SQL file in supabase/migrations/

# Apply locally
supabase db reset

# Push to production
supabase db push
```

Migration conventions:
- One migration per logical change
- Migrations are forward-only (no down migrations -- use new migrations to revert)
- Always include change_log triggers for new tables
- Never drop tables -- archive/deprecate instead
- Test migrations locally before pushing to production
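The "always include `change_log` triggers" convention means every new-table migration should end by attaching the shared audit trigger, along these lines (trigger name is illustrative):

```sql
-- Attach the shared audit trigger to a newly created table.
-- log_change() is defined once in the initial schema migration.
create trigger outlet_log_change
  after insert or update or delete on outlet
  for each row execute function log_change();
```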
### Deploying the MCP Server

```bash
cd mcp-server
wrangler deploy
```

### Environment Variables
Set Cloudflare Worker secrets:

```bash
wrangler secret put SUPABASE_URL
wrangler secret put SUPABASE_SERVICE_ROLE_KEY
wrangler secret put SLACK_WEBHOOK_URL
```

## Coding Conventions
### TypeScript (MCP Server)

- Use the `@modelcontextprotocol/sdk` TypeScript SDK
- Use `@supabase/supabase-js` for database access
- One file per tool group in `src/tools/`
- Each tool handler: validate input with Zod, set session context (`app.user_id`, `app.agent_id`, `app.agent_session`), call Supabase, return structured JSON
- Tool names use snake_case (e.g., `create_outlet`, `list_opportunities_by_stage`)
- Return meaningful error messages -- agents need to understand what went wrong
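A dependency-free sketch of that handler flow (in the real code, validation is Zod and data access is supabase-js; both are stubbed here so only the control flow shows):

```typescript
// Result shape an MCP tool handler returns to the calling agent.
type ToolResult = { content: { type: "text"; text: string }[]; isError?: boolean };

// Hypothetical create_outlet handler. `db` stands in for the Supabase client.
function createOutletHandler(
  input: unknown,
  db: (sql: string, params: unknown[]) => unknown
): ToolResult {
  // 1. Validate input (Zod schema in the real server)
  const args = input as { name?: unknown };
  if (typeof args?.name !== "string" || args.name.length === 0) {
    // Meaningful error message so the agent can self-correct
    return {
      isError: true,
      content: [{ type: "text", text: "create_outlet: 'name' must be a non-empty string" }],
    };
  }
  // 2. Session context (app.user_id etc.) would be set here; 3. call the DB
  const row = db("insert into outlet (name) values ($1) returning *", [args.name]);
  // 4. Return structured JSON
  return { content: [{ type: "text", text: JSON.stringify(row) }] };
}
```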
### SQL (Migrations)

- Table names are singular snake_case (e.g., `outlet`, `contact_outlet`)
- Use UUIDs for all primary keys (`gen_random_uuid()`)
- All tables with user data get: `archived_at`, `created_at`, `updated_at`
- All tables get the `log_change()` trigger
- Use enums for fixed sets (`channel_type`, `opportunity_stage`)
- Use JSONB for flexible/extensible attributes (channel metrics, social links)
- Index foreign keys and commonly filtered columns
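Put together, a table that follows these conventions might look like this (the table and its non-standard columns are illustrative, not part of the real schema):

```sql
-- Hypothetical table following the conventions above.
create table press_kit (
  id          uuid primary key default gen_random_uuid(),
  outlet_id   uuid not null references outlet (id),
  name        text not null,
  attributes  jsonb not null default '{}'::jsonb,  -- flexible/extensible fields
  archived_at timestamptz,                         -- soft delete marker
  created_at  timestamptz not null default now(),
  updated_at  timestamptz not null default now()
);

-- Index the foreign key and a commonly filtered column
create index press_kit_outlet_id_idx on press_kit (outlet_id);
create index press_kit_archived_at_idx on press_kit (archived_at);
```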
### Soft Deletes

- Archive: `UPDATE table SET archived_at = now() WHERE id = $1`
- Restore: `UPDATE table SET archived_at = NULL WHERE id = $1`
- Default queries exclude archived: `WHERE archived_at IS NULL`
- Explicit `include_archived` parameter on list/search tools
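The `include_archived` default can be stated as a tiny helper (a sketch only -- the real tools push this filter into the database query rather than filtering in memory):

```typescript
// Rows eligible for soft delete carry an archived_at timestamp (or null).
interface SoftDeletable {
  archived_at: string | null;
}

// Default behavior excludes archived rows; include_archived=true returns all.
function visibleRows<T extends SoftDeletable>(rows: T[], includeArchived = false): T[] {
  return includeArchived ? rows : rows.filter((r) => r.archived_at === null);
}
```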
### Change Log

Every table mutation is automatically logged by the `log_change()` Postgres trigger. The MCP server sets session variables before each operation:

```sql
SET LOCAL app.user_id = '<user-uuid>';
SET LOCAL app.agent_id = '<agent-identifier>';
SET LOCAL app.agent_session = '<session-id>';
```

## Common Tasks
### Adding a New Table

- Create migration: `supabase migration new add_<table_name>`
- Write CREATE TABLE with standard columns (`id`, `archived_at`, `created_at`, `updated_at`)
- Add `log_change()` trigger
- Add indexes on foreign keys
- Add tool handlers in `mcp-server/src/tools/`
- Add types in `mcp-server/src/types.ts`
### Adding a New MCP Tool

- Add handler function in the appropriate `src/tools/*.ts` file
- Define Zod input schema
- Register tool in `src/index.ts`
- Test locally with `wrangler dev`
### Schema Changes

- Create migration: `supabase migration new <description>`
- Write ALTER TABLE statements
- If adding columns with defaults, consider backfilling
- Test: `supabase db reset`
- Deploy: `supabase db push`
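For example, adding a column with a default plus a backfill might look like this (column, default, and backfill condition are all illustrative):

```sql
-- Hypothetical migration: add a column with a default...
alter table opportunity
  add column priority text not null default 'normal';

-- ...then backfill existing rows where the default is not enough.
-- (Illustrative only; the real backfill logic depends on the change.)
update opportunity
set priority = 'high'
where archived_at is null;
```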
Checking Change Log
-- Recent changes by a specific agent
SELECT * FROM change_log
WHERE agent_id = 'claude-cowork'
ORDER BY created_at DESC
LIMIT 20;
-- All changes to a specific record
SELECT * FROM change_log
WHERE table_name = 'opportunity' AND record_id = '<uuid>'
ORDER BY created_at;
-- Bulk changes in a session (for potential rollback)
SELECT * FROM change_log
WHERE agent_session = '<session-id>'
ORDER BY created_at;Important Reminders
- Keep SPEC.md current -- when adding or changing tables, tools, enums, or features, update SPEC.md in the same commit
- Never hard-delete data -- always use soft delete (archive)
- Always include `change_log` triggers on new tables
- Set session context (`user_id`, `agent_id`, `agent_session`) before every DB operation in the MCP server
- Test migrations locally with `supabase db reset` before pushing to production
- Channel attributes are JSONB -- don't over-normalize channel-specific fields
- The General campaign always exists and is the default for new opportunities
- Media files go to the Supabase Storage bucket `pr-media`; metadata goes to the `media` table
- Email sync only stores emails involving known contacts -- unknown addresses are ignored
- Email dedup uses Gmail message ID and thread ID as unique constraints
- Gmail credentials (refresh token) are stored as Cloudflare Worker secrets, never in code