Initial commit: Prompt Performance Analytics Dashboard

- Agent 1: Prompt Analyzer (FastAPI + Anthropic Claude)
  - Scores prompts on 5 dimensions (clarity, token efficiency,
    goal alignment, structure, vagueness)
  - Generates optimized rewrites with token savings
  - Per-project Context Store for pattern learning

- Agent 2: Analytics Reporter
  - SQLite storage with async writes
  - Human vs Agent mode tracking
  - Rewrite adoption metrics

- MCP Server
  - analyze_prompt and get_analysis_history tools
  - Claude Desktop integration via claude_desktop_config.json

- Dashboard UI
  - Live KPIs, quality trend chart, common mistakes donut chart
  - Interaction feed with per-project filtering

- CLI test script (test_agent.py) for direct API testing
Author: Ananya
Date: 2026-02-22 16:56:52 +05:30
Commit: 704cccf729
24 changed files with 4160 additions and 0 deletions

.env.example

@@ -0,0 +1,4 @@
ANTHROPIC_API_KEY=sk-ant-your-key-here
ANTHROPIC_MODEL=claude-sonnet-4-20250514
LLM_MAX_TOKENS=4096
LLM_TEMPERATURE=0.3

.gitignore (vendored)

@@ -0,0 +1,11 @@
__pycache__/
*.pyc
.env
context_store/
*.db
.venv/
venv/
node_modules/
.DS_Store
*.pptx
make_presentation.py

PROJECT_BRIEF.docx (binary file, not shown)

README.md

@@ -0,0 +1,70 @@
# Prompt Performance Analytics
A two-agent system for analyzing, scoring, and improving AI prompts — for both humans and enterprise multi-agent systems.
## Quick Start
```bash
# 1. Install dependencies
pip install -r requirements.txt
# 2. Set up Anthropic API key
cp .env.example .env
# Edit .env with your Anthropic API key
# 3. Run the server
uvicorn backend.main:app --reload --port 8000
# 4. Open in browser
# Analyzer: http://localhost:8000/
# Dashboard: http://localhost:8000/dashboard-ui
```
## Architecture
```
Agent 1: Prompt Analyzer ──→ Agent 2: Analytics Reporter ──→ Dashboard
        ↑                               ↑
        ├── REST API (humans)           └── SQLite DB
        └── MCP Server (agents)
```
### Agent 1: Prompt Analyzer
Scores prompts on 5 dimensions (clarity, token efficiency, goal alignment, structure, vagueness), identifies mistakes, and generates optimized rewrites. Uses Anthropic Claude via the official SDK.
### Agent 2: Analytics Reporter
Aggregates all analyses into trends, mistake frequencies, and agent rankings, then serves data to the dashboard.
### Context Store
Per-project isolated memory. Each project's history, patterns, and agent profiles are stored separately — no cross-contamination.
## API Endpoints
| Method | Endpoint | Description |
|--------|----------|-------------|
| POST | `/analyze` | Analyze a prompt |
| POST | `/rewrite-choice` | Record rewrite acceptance |
| GET | `/dashboard/overview` | KPI overview |
| GET | `/dashboard/interactions` | Paginated interaction feed |
| GET | `/dashboard/trends?days=N&hours=N` | Quality score trends |
| GET | `/dashboard/mistakes` | Common mistake types |
| GET | `/dashboard/agents` | Agent leaderboard |
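A minimal client sketch for the main endpoint, using only the standard library. The payload fields mirror the `AnalyzeRequest` parameters visible in `backend/main.py` (`prompt`, `context`, `project_id`, `source_agent`, `target_agent`); the URL and example values are assumptions for illustration.

```python
import json
from urllib import request

# Hypothetical payload; field names follow the AnalyzeRequest usage in backend/main.py.
payload = {
    "prompt": "Summarize the attached report in three bullet points.",
    "context": None,
    "project_id": "demo-project",
    "source_agent": None,
    "target_agent": None,
}

req = request.Request(
    "http://localhost:8000/analyze",  # default dev port from the Quick Start
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)

# With the server running, the response JSON is the analysis result plus an
# "analysis_id" field added by the endpoint:
# body = json.loads(request.urlopen(req).read())
```

The returned `analysis_id` is what `/rewrite-choice` expects back when recording whether the rewrite was adopted.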
## MCP Server (for agent-to-agent)
```bash
python -m mcp_server.server
```
Tools exposed:
- `analyze_prompt` — analyze a prompt with optional project context
- `get_analysis_history` — retrieve past analyses
## Environment Variables
| Variable | Description | Default |
|----------|-------------|---------|
| `ANTHROPIC_API_KEY` | Anthropic API key | — |
| `ANTHROPIC_MODEL` | Claude model to use | claude-sonnet-4-20250514 |
| `LLM_MAX_TOKENS` | Max output tokens | 4096 |
| `LLM_TEMPERATURE` | Generation temperature | 0.3 |
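`prompt_analyzer/config.py` is not included in this diff view; a loader consistent with the table above might look like the following sketch (the variable names match the table, the structure is an assumption).

```python
import os

# Hypothetical sketch of config loading; defaults follow the
# Environment Variables table in the README.
ANTHROPIC_API_KEY = os.getenv("ANTHROPIC_API_KEY", "")
ANTHROPIC_MODEL = os.getenv("ANTHROPIC_MODEL", "claude-sonnet-4-20250514")
LLM_MAX_TOKENS = int(os.getenv("LLM_MAX_TOKENS", "4096"))
LLM_TEMPERATURE = float(os.getenv("LLM_TEMPERATURE", "0.3"))
```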

analytics_reporter/__init__.py

@@ -0,0 +1,4 @@
"""Analytics Reporter package."""
from analytics_reporter.reporter import AnalyticsReporter
__all__ = ["AnalyticsReporter"]

analytics_reporter/db.py

@@ -0,0 +1,288 @@
"""SQLite database for storing analysis results and aggregations."""
import aiosqlite
import json
import logging
from typing import Optional
from prompt_analyzer.config import ANALYTICS_DB_PATH
logger = logging.getLogger(__name__)
DB_PATH = ANALYTICS_DB_PATH
CREATE_TABLE_SQL = """
CREATE TABLE IF NOT EXISTS analyses (
id INTEGER PRIMARY KEY AUTOINCREMENT,
timestamp TEXT NOT NULL,
mode TEXT NOT NULL DEFAULT 'human',
source_agent TEXT,
target_agent TEXT,
project_id TEXT,
original_prompt TEXT NOT NULL,
rewritten_prompt TEXT,
overall_score INTEGER NOT NULL DEFAULT 0,
clarity INTEGER NOT NULL DEFAULT 0,
token_efficiency INTEGER NOT NULL DEFAULT 0,
goal_alignment INTEGER NOT NULL DEFAULT 0,
structure INTEGER NOT NULL DEFAULT 0,
vagueness_index INTEGER NOT NULL DEFAULT 0,
mistake_count INTEGER NOT NULL DEFAULT 0,
mistakes_json TEXT,
original_tokens INTEGER NOT NULL DEFAULT 0,
rewritten_tokens INTEGER NOT NULL DEFAULT 0,
token_savings_percent REAL NOT NULL DEFAULT 0.0,
rewrite_used INTEGER,
full_result_json TEXT
);
"""
CREATE_INDEX_SQL = [
"CREATE INDEX IF NOT EXISTS idx_timestamp ON analyses(timestamp);",
"CREATE INDEX IF NOT EXISTS idx_project ON analyses(project_id);",
"CREATE INDEX IF NOT EXISTS idx_source ON analyses(source_agent);",
]
async def init_db():
"""Initialize the database and create tables."""
async with aiosqlite.connect(DB_PATH) as db:
await db.execute(CREATE_TABLE_SQL)
for idx_sql in CREATE_INDEX_SQL:
await db.execute(idx_sql)
await db.commit()
logger.info("Database initialized at %s", DB_PATH)
async def store_analysis(result_dict: dict) -> int:
"""Store an analysis result and return its ID."""
scores = result_dict.get("scores", {})
meta = result_dict.get("metadata", {})
tc = result_dict.get("token_comparison", {})
def _get_score(dim: str) -> int:
val = scores.get(dim, {})
if isinstance(val, dict):
return val.get("score", 0)
# Tolerate scores stored as bare numbers rather than {"score": N} dicts
return int(val) if isinstance(val, (int, float)) else 0
async with aiosqlite.connect(DB_PATH) as db:
cursor = await db.execute(
"""INSERT INTO analyses (
timestamp, mode, source_agent, target_agent, project_id,
original_prompt, rewritten_prompt,
overall_score, clarity, token_efficiency, goal_alignment,
structure, vagueness_index,
mistake_count, mistakes_json,
original_tokens, rewritten_tokens, token_savings_percent,
full_result_json
) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)""",
(
meta.get("timestamp", ""),
meta.get("mode", "human"),
meta.get("source_agent"),
meta.get("target_agent"),
meta.get("project_id"),
result_dict.get("original_prompt", ""),
result_dict.get("rewritten_prompt", ""),
result_dict.get("overall_score", 0),
_get_score("clarity"),
_get_score("token_efficiency"),
_get_score("goal_alignment"),
_get_score("structure"),
_get_score("vagueness_index"),
len(result_dict.get("mistakes", [])),
json.dumps(result_dict.get("mistakes", []), default=str),
tc.get("original_tokens", 0),
tc.get("rewritten_tokens", 0),
tc.get("savings_percent", 0.0),
json.dumps(result_dict, default=str),
),
)
await db.commit()
row_id = cursor.lastrowid
logger.info("Stored analysis id=%d", row_id)
return row_id
async def get_interactions(
limit: int = 50,
offset: int = 0,
project_id: Optional[str] = None,
) -> list[dict]:
"""Get paginated interaction rows."""
async with aiosqlite.connect(DB_PATH) as db:
db.row_factory = aiosqlite.Row
if project_id:
cursor = await db.execute(
"SELECT * FROM analyses WHERE project_id = ? ORDER BY id DESC LIMIT ? OFFSET ?",
(project_id, limit, offset),
)
else:
cursor = await db.execute(
"SELECT * FROM analyses ORDER BY id DESC LIMIT ? OFFSET ?",
(limit, offset),
)
rows = await cursor.fetchall()
return [dict(row) for row in rows]
async def get_total_count(project_id: Optional[str] = None) -> int:
"""Get total interaction count."""
async with aiosqlite.connect(DB_PATH) as db:
if project_id:
cursor = await db.execute(
"SELECT COUNT(*) FROM analyses WHERE project_id = ?",
(project_id,),
)
else:
cursor = await db.execute("SELECT COUNT(*) FROM analyses")
row = await cursor.fetchone()
return row[0] if row else 0
async def get_overview_stats() -> dict:
"""Get aggregate stats for the dashboard overview."""
async with aiosqlite.connect(DB_PATH) as db:
cursor = await db.execute("""
SELECT
COUNT(*) as total,
SUM(CASE WHEN mode = 'human' THEN 1 ELSE 0 END) as human_count,
SUM(CASE WHEN mode = 'agent' THEN 1 ELSE 0 END) as agent_count,
AVG(overall_score) as avg_score,
AVG(token_savings_percent) as avg_savings,
SUM(mistake_count) as total_mistakes,
SUM(CASE WHEN rewrite_used = 1 THEN 1 ELSE 0 END) as rewrites_used,
SUM(CASE WHEN rewrite_used IS NOT NULL THEN 1 ELSE 0 END) as rewrites_decided,
SUM(original_tokens) as total_tokens,
AVG(original_tokens) as avg_tokens
FROM analyses
""")
row = await cursor.fetchone()
if not row or row[0] == 0:
return {
"total_interactions": 0,
"human_count": 0,
"agent_count": 0,
"avg_overall_score": 0,
"avg_token_savings": 0,
"rewrite_acceptance_rate": 0,
"total_mistakes_found": 0,
"total_tokens": 0,
"avg_tokens_per_prompt": 0,
}
return {
"total_interactions": row[0],
"human_count": row[1] or 0,
"agent_count": row[2] or 0,
"avg_overall_score": round(row[3] or 0, 1),
"avg_token_savings": round(row[4] or 0, 1),
"rewrite_acceptance_rate": round(
(row[5] / row[6] * 100) if row[6] and row[6] > 0 else 0, 1
),
"total_mistakes_found": row[7] or 0,
"total_tokens": row[8] or 0,
"avg_tokens_per_prompt": round(row[9] or 0, 1),
}
async def get_trends(hours: Optional[int] = None, days: int = 30) -> list[dict]:
"""Get score trends over time. If hours is set, group by hour; otherwise by day."""
async with aiosqlite.connect(DB_PATH) as db:
if hours is not None:
# Group by hour for short time ranges
cursor = await db.execute(
"""
SELECT
strftime('%Y-%m-%d %H:00', timestamp) as period,
AVG(overall_score) as avg_score,
COUNT(*) as count
FROM analyses
WHERE timestamp >= datetime('now', ?)
GROUP BY strftime('%Y-%m-%d %H:00', timestamp)
ORDER BY period ASC
""",
(f"-{hours} hours",),
)
else:
# Group by day for longer ranges
cursor = await db.execute(
"""
SELECT
DATE(timestamp) as period,
AVG(overall_score) as avg_score,
COUNT(*) as count
FROM analyses
WHERE timestamp >= datetime('now', ?)
GROUP BY DATE(timestamp)
ORDER BY period ASC
""",
(f"-{days} days",),
)
rows = await cursor.fetchall()
return [
{"date": row[0], "avg_score": round(row[1], 1), "count": row[2]}
for row in rows
]
async def get_mistake_frequencies(limit: int = 10) -> list[dict]:
"""Get the most common mistake types."""
async with aiosqlite.connect(DB_PATH) as db:
cursor = await db.execute("SELECT mistakes_json FROM analyses WHERE mistakes_json IS NOT NULL")
rows = await cursor.fetchall()
counts: dict[str, int] = {}
for row in rows:
try:
mistakes = json.loads(row[0])
for m in mistakes:
mt = m.get("type", "unknown")
counts[mt] = counts.get(mt, 0) + 1
except (json.JSONDecodeError, TypeError):
continue
total = sum(counts.values()) or 1
sorted_counts = sorted(counts.items(), key=lambda x: -x[1])[:limit]
return [
{"type": k, "count": v, "percentage": round(v / total * 100, 1)}
for k, v in sorted_counts
]
async def get_agent_leaderboard() -> list[dict]:
"""Get per-agent statistics."""
async with aiosqlite.connect(DB_PATH) as db:
cursor = await db.execute("""
SELECT
source_agent,
COUNT(*) as total_prompts,
AVG(overall_score) as avg_score
FROM analyses
WHERE source_agent IS NOT NULL
GROUP BY source_agent
ORDER BY avg_score DESC
""")
rows = await cursor.fetchall()
results = []
for row in rows:
results.append({
"agent_id": row[0],
"total_prompts": row[1],
"avg_score": round(row[2], 1),
"weakest_dimension": None,
"most_common_mistake": None,
"improvement_trend": "",
})
return results
async def mark_rewrite_used(analysis_id: int, used: bool) -> None:
"""Mark whether the user chose the rewritten prompt."""
async with aiosqlite.connect(DB_PATH) as db:
await db.execute(
"UPDATE analyses SET rewrite_used = ? WHERE id = ?",
(1 if used else 0, analysis_id),
)
await db.commit()
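The JSON-unpacking aggregation in `get_mistake_frequencies` can be exercised without a database. This sketch feeds the same counting logic a few sample `mistakes_json` strings (the helper name and sample data are illustrative, not part of the commit):

```python
import json

def count_mistakes(rows: list[str], limit: int = 10) -> list[dict]:
    # Same counting logic as get_mistake_frequencies, minus the SQLite fetch.
    counts: dict[str, int] = {}
    for raw in rows:
        try:
            for m in json.loads(raw):
                mt = m.get("type", "unknown")
                counts[mt] = counts.get(mt, 0) + 1
        except (json.JSONDecodeError, TypeError):
            continue
    total = sum(counts.values()) or 1
    top = sorted(counts.items(), key=lambda x: -x[1])[:limit]
    return [
        {"type": k, "count": v, "percentage": round(v / total * 100, 1)}
        for k, v in top
    ]

rows = [
    json.dumps([{"type": "vague_goal"}, {"type": "missing_context"}]),
    json.dumps([{"type": "vague_goal"}]),
    "not json",  # malformed rows are skipped, as in the DB version
]
print(count_mistakes(rows))
# [{'type': 'vague_goal', 'count': 2, 'percentage': 66.7},
#  {'type': 'missing_context', 'count': 1, 'percentage': 33.3}]
```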

analytics_reporter/models.py

@@ -0,0 +1,58 @@
"""Pydantic models for dashboard data."""
from __future__ import annotations
from pydantic import BaseModel, Field
class DashboardOverview(BaseModel):
"""Top-level KPI data for the dashboard."""
total_interactions: int = 0
human_count: int = 0
agent_count: int = 0
avg_overall_score: float = 0.0
avg_token_savings: float = 0.0
rewrite_acceptance_rate: float = 0.0
total_mistakes_found: int = 0
class TrendPoint(BaseModel):
"""A single point in a time-series trend."""
date: str
avg_score: float
count: int
class MistakeFrequency(BaseModel):
"""How often a mistake type appears."""
type: str
count: int
percentage: float
class AgentStats(BaseModel):
"""Stats for a single agent across all its interactions."""
agent_id: str
total_prompts: int = 0
avg_score: float = 0.0
weakest_dimension: str | None = None
most_common_mistake: str | None = None
improvement_trend: str = "" # ↑ ↓ —
class InteractionRow(BaseModel):
"""A single row in the interaction feed."""
id: int
timestamp: str
source: str # "human" or agent name
target: str | None = None
project_id: str | None = None
prompt_preview: str
overall_score: int
clarity: int
token_efficiency: int
goal_alignment: int
structure: int
vagueness_index: int
mistake_count: int
token_savings: float
rewrite_used: bool | None = None

analytics_reporter/reporter.py

@@ -0,0 +1,78 @@
"""
Agent 2: Analytics Reporter
Receives analysis results from Agent 1 (Prompt Analyzer),
aggregates them, and stores them for the dashboard.
"""
import logging
from prompt_analyzer.models import AnalysisResult
from analytics_reporter import db
logger = logging.getLogger(__name__)
class AnalyticsReporter:
"""
Collects analysis results, stores them in the database,
and provides aggregated data for the dashboard.
"""
async def initialize(self) -> None:
"""Initialize the database."""
await db.init_db()
logger.info("AnalyticsReporter initialized")
async def report(self, result: AnalysisResult) -> int:
"""
Process and store an analysis result.
Args:
result: The AnalysisResult from the Prompt Analyzer
Returns:
The database ID of the stored analysis
"""
result_dict = result.model_dump(mode="json")
analysis_id = await db.store_analysis(result_dict)
logger.info(
"Reported analysis id=%d score=%d project=%s",
analysis_id,
result.overall_score,
result.metadata.project_id,
)
return analysis_id
async def get_overview(self) -> dict:
"""Get dashboard overview KPIs."""
return await db.get_overview_stats()
async def get_interactions(
self, limit: int = 50, offset: int = 0, project_id: str | None = None
) -> dict:
"""Get paginated interaction feed."""
rows = await db.get_interactions(limit, offset, project_id)
total = await db.get_total_count(project_id)
return {
"interactions": rows,
"total": total,
"limit": limit,
"offset": offset,
}
async def get_trends(self, days: int = 30, hours: int | None = None) -> list[dict]:
"""Get score trends over time."""
return await db.get_trends(hours=hours, days=days)
async def get_mistake_frequencies(self, limit: int = 10) -> list[dict]:
"""Get most common mistake types."""
return await db.get_mistake_frequencies(limit)
async def get_agent_leaderboard(self) -> list[dict]:
"""Get per-agent statistics."""
return await db.get_agent_leaderboard()
async def mark_rewrite_choice(self, analysis_id: int, used: bool) -> None:
"""Record whether the user chose the rewritten prompt."""
await db.mark_rewrite_used(analysis_id, used)
logger.info("Rewrite choice recorded: id=%d used=%s", analysis_id, used)

backend/main.py

@@ -0,0 +1,169 @@
"""
FastAPI backend REST API for the Prompt Analyzer and Dashboard.
Serves:
- POST /analyze Human prompt analysis
- POST /rewrite-choice Record rewrite acceptance
- GET /dashboard/* Dashboard data endpoints
- GET /health Health check
"""
import logging
from contextlib import asynccontextmanager
from typing import Optional
from fastapi import FastAPI, HTTPException, Query
from fastapi.middleware.cors import CORSMiddleware
from fastapi.staticfiles import StaticFiles
from fastapi.responses import FileResponse
from pydantic import BaseModel
from prompt_analyzer import PromptAnalyzer
from prompt_analyzer.models import AnalyzeRequest
from analytics_reporter.reporter import AnalyticsReporter
logging.basicConfig(level=logging.INFO, format="%(asctime)s %(name)s %(levelname)s %(message)s")
logger = logging.getLogger(__name__)
# Shared instances
analyzer = PromptAnalyzer()
reporter = AnalyticsReporter()
@asynccontextmanager
async def lifespan(app: FastAPI):
"""Initialize services on startup."""
await reporter.initialize()
logger.info("Backend ready")
yield
logger.info("Backend shutting down")
app = FastAPI(
title="Prompt Performance Analytics",
version="0.1.0",
lifespan=lifespan,
)
# CORS — allow frontend to call from any origin in dev
app.add_middleware(
CORSMiddleware,
allow_origins=["*"],
allow_credentials=True,
allow_methods=["*"],
allow_headers=["*"],
)
# ── Analysis Endpoints ─────────────────────────────────────────
@app.post("/analyze")
async def analyze_prompt(request: AnalyzeRequest):
"""
Analyze a prompt and return quality scores, mistakes, and rewrite.
This is the main endpoint used by both the Web UI and external clients.
"""
try:
result = await analyzer.analyze(
prompt=request.prompt,
context=request.context,
project_id=request.project_id,
source_agent=request.source_agent,
target_agent=request.target_agent,
)
# Agent 2: store the result for dashboard
analysis_id = await reporter.report(result)
# Return result with the DB id so frontend can track rewrite choice
response = result.model_dump(mode="json")
response["analysis_id"] = analysis_id
return response
except Exception as e:
logger.error("Analysis failed: %s", str(e), exc_info=True)
raise HTTPException(status_code=500, detail=f"Analysis failed: {str(e)}")
class RewriteChoiceRequest(BaseModel):
analysis_id: int
used_rewrite: bool
@app.post("/rewrite-choice")
async def record_rewrite_choice(request: RewriteChoiceRequest):
"""Record whether the user chose the rewritten prompt."""
try:
await reporter.mark_rewrite_choice(request.analysis_id, request.used_rewrite)
return {"status": "ok"}
except Exception as e:
raise HTTPException(status_code=500, detail=str(e))
# ── Dashboard Endpoints ────────────────────────────────────────
@app.get("/dashboard/overview")
async def dashboard_overview():
"""Get dashboard KPI overview."""
return await reporter.get_overview()
@app.get("/dashboard/interactions")
async def dashboard_interactions(
limit: int = Query(default=50, ge=1, le=200),
offset: int = Query(default=0, ge=0),
project_id: Optional[str] = Query(default=None),
):
"""Get paginated interaction feed."""
return await reporter.get_interactions(limit, offset, project_id)
@app.get("/dashboard/trends")
async def dashboard_trends(
days: int = Query(default=30, ge=1, le=365),
hours: Optional[int] = Query(default=None, ge=1, le=720),
):
"""Get quality score trends over time. Use hours for short ranges."""
return await reporter.get_trends(days=days, hours=hours)
@app.get("/dashboard/mistakes")
async def dashboard_mistakes(limit: int = Query(default=10, ge=1, le=50)):
"""Get most common mistake types."""
return await reporter.get_mistake_frequencies(limit)
@app.get("/dashboard/agents")
async def dashboard_agents():
"""Get agent leaderboard."""
return await reporter.get_agent_leaderboard()
# ── Static files & Health ──────────────────────────────────────
@app.get("/health")
async def health():
return {"status": "healthy", "version": "0.1.0"}
# Serve frontend static files
import os
frontend_dir = os.path.join(os.path.dirname(os.path.dirname(__file__)), "frontend")
dashboard_dir = os.path.join(os.path.dirname(os.path.dirname(__file__)), "dashboard")
if os.path.isdir(frontend_dir):
@app.get("/", response_class=FileResponse)
async def serve_frontend():
return FileResponse(os.path.join(frontend_dir, "index.html"))
app.mount("/static", StaticFiles(directory=frontend_dir), name="frontend")
if os.path.isdir(dashboard_dir):
@app.get("/dashboard-ui", response_class=FileResponse)
async def serve_dashboard():
return FileResponse(os.path.join(dashboard_dir, "index.html"))
app.mount("/dashboard-static", StaticFiles(directory=dashboard_dir), name="dashboard")

dashboard/app.js

@@ -0,0 +1,438 @@
/**
* Dashboard Frontend Logic
* Fetches data from /dashboard/* API endpoints and renders charts + tables.
*/
const API = window.location.origin;
let currentPage = 0;
const PAGE_SIZE = 20;
let allInteractions = [];
let trendChart = null;
let mistakesChart = null;
// ── Initialize ────────────────────────────────────────────────
document.addEventListener('DOMContentLoaded', () => {
refreshAll();
});
async function refreshAll() {
await Promise.all([
loadOverview(),
loadTrends(),
loadMistakes(),
loadInteractions(),
loadAgents(),
]);
}
// ── Overview KPIs ─────────────────────────────────────────────
async function loadOverview() {
try {
const res = await fetch(`${API}/dashboard/overview`);
const data = await res.json();
document.getElementById('kpi-total').textContent = data.total_interactions.toLocaleString();
document.getElementById('kpi-avg-score').textContent = `${data.avg_overall_score}%`;
document.getElementById('kpi-savings').textContent = `${data.avg_token_savings}%`;
document.getElementById('kpi-rewrite').textContent = `${data.rewrite_acceptance_rate}%`;
document.getElementById('kpi-split').textContent = `${data.human_count}H / ${data.agent_count}A`;
document.getElementById('kpi-total-tokens').textContent = data.total_tokens.toLocaleString();
document.getElementById('kpi-avg-tokens').textContent = Math.round(data.avg_tokens_per_prompt).toLocaleString();
} catch (e) {
console.error('Failed to load overview:', e);
}
}
// ── Trends Chart ──────────────────────────────────────────────
async function loadTrends(params = {}) {
try {
const url = new URL(`${API}/dashboard/trends`);
if (params.hours) {
url.searchParams.set('hours', params.hours);
} else {
url.searchParams.set('days', params.days || 30);
}
const res = await fetch(url);
const data = await res.json();
if (!data || data.length === 0) {
document.getElementById('trend-chart').style.display = 'none';
document.getElementById('trend-empty').classList.remove('hidden');
return;
}
document.getElementById('trend-chart').style.display = 'block';
document.getElementById('trend-empty').classList.add('hidden');
const ctx = document.getElementById('trend-chart').getContext('2d');
if (trendChart) trendChart.destroy();
// Format labels based on whether data is hourly or daily
const labels = data.map(d => {
if (d.date && d.date.includes(':')) {
// Hourly format: show time
const parts = d.date.split(' ');
return parts.length > 1 ? parts[1] : d.date;
}
return d.date;
});
// Calculate max interactions for axis scaling
const maxCount = Math.max(...data.map(d => d.count), 1);
trendChart = new Chart(ctx, {
type: 'line',
data: {
labels: labels,
datasets: [{
label: 'Avg Quality Score',
data: data.map(d => d.avg_score),
borderColor: '#3b82f6',
backgroundColor: 'rgba(59, 130, 246, 0.1)',
fill: true,
tension: 0.4,
pointRadius: 4,
pointBackgroundColor: '#3b82f6',
}, {
label: 'Interactions',
data: data.map(d => d.count),
borderColor: '#8b5cf6',
backgroundColor: 'rgba(139, 92, 246, 0.1)',
fill: false,
tension: 0.4,
pointRadius: 3,
pointBackgroundColor: '#8b5cf6',
yAxisID: 'y1',
}],
},
options: {
responsive: true,
maintainAspectRatio: false,
interaction: { mode: 'index', intersect: false },
plugins: {
legend: {
labels: { color: '#8899b4', font: { family: 'Inter', size: 12 } }
},
},
scales: {
x: {
ticks: { color: '#5a6a85', font: { size: 11 }, maxRotation: 45 },
grid: { color: 'rgba(42, 52, 82, 0.5)' },
},
y: {
min: 0, max: 100,
ticks: { color: '#5a6a85', font: { size: 11 } },
grid: { color: 'rgba(42, 52, 82, 0.5)' },
},
y1: {
position: 'right',
min: 0,
suggestedMax: maxCount + 1,
ticks: {
color: '#5a6a85',
font: { size: 11 },
stepSize: 1,
precision: 0,
},
grid: { display: false },
},
},
},
});
} catch (e) {
console.error('Failed to load trends:', e);
}
}
function setTrendFilter(btn) {
// Update active state
document.querySelectorAll('.filter-btn').forEach(b => b.classList.remove('active'));
btn.classList.add('active');
// Build params
const params = {};
if (btn.dataset.hours) {
params.hours = parseInt(btn.dataset.hours);
} else if (btn.dataset.days) {
params.days = parseInt(btn.dataset.days);
}
loadTrends(params);
}
// ── Mistakes Chart ────────────────────────────────────────────
async function loadMistakes() {
try {
const res = await fetch(`${API}/dashboard/mistakes?limit=6`);
const data = await res.json();
if (!data || data.length === 0) {
document.getElementById('mistakes-chart').style.display = 'none';
document.getElementById('mistakes-empty').classList.remove('hidden');
return;
}
document.getElementById('mistakes-chart').style.display = 'block';
document.getElementById('mistakes-empty').classList.add('hidden');
const ctx = document.getElementById('mistakes-chart').getContext('2d');
if (mistakesChart) mistakesChart.destroy();
const colors = ['#ef4444', '#f59e0b', '#3b82f6', '#8b5cf6', '#06b6d4', '#10b981'];
mistakesChart = new Chart(ctx, {
type: 'doughnut',
data: {
labels: data.map(d => formatMistakeType(d.type)),
datasets: [{
data: data.map(d => d.count),
backgroundColor: colors.slice(0, data.length),
borderColor: '#1a2235',
borderWidth: 3,
}],
},
options: {
responsive: true,
maintainAspectRatio: false,
plugins: {
legend: {
position: 'right',
labels: { color: '#8899b4', font: { family: 'Inter', size: 11 }, padding: 12 },
},
},
},
});
} catch (e) {
console.error('Failed to load mistakes:', e);
}
}
// ── Interactions Feed ─────────────────────────────────────────
async function loadInteractions() {
try {
const projectFilter = document.getElementById('feed-filter').value.trim() || null;
const url = new URL(`${API}/dashboard/interactions`);
url.searchParams.set('limit', PAGE_SIZE);
url.searchParams.set('offset', currentPage * PAGE_SIZE);
if (projectFilter) url.searchParams.set('project_id', projectFilter);
const res = await fetch(url);
const data = await res.json();
allInteractions = data.interactions;
const total = data.total;
renderFeed(allInteractions);
// Pagination
const totalPages = Math.ceil(total / PAGE_SIZE) || 1;
document.getElementById('page-info').textContent = `Page ${currentPage + 1} of ${totalPages}`;
document.getElementById('prev-btn').disabled = currentPage === 0;
document.getElementById('next-btn').disabled = (currentPage + 1) * PAGE_SIZE >= total;
} catch (e) {
console.error('Failed to load interactions:', e);
}
}
function renderFeed(rows) {
const tbody = document.getElementById('feed-body');
if (!rows || rows.length === 0) {
tbody.innerHTML = '<tr><td colspan="9" class="feed-empty">No interactions yet — go analyze some prompts!</td></tr>';
return;
}
tbody.innerHTML = rows.map(row => {
const score = row.overall_score;
const scoreClass = score >= 85 ? 'excellent' : score >= 65 ? 'good' : score >= 40 ? 'fair' : 'poor';
const source = row.source_agent
? `<span class="source-badge">🤖 ${escapeHtml(row.source_agent)}</span>`
: '<span class="source-badge">👤 Human</span>';
const preview = escapeHtml((row.original_prompt || '').substring(0, 60)) + (row.original_prompt && row.original_prompt.length > 60 ? '...' : '');
const rewrite = row.rewrite_used === 1 ? '✅' : row.rewrite_used === 0 ? '❌' : '—';
const time = formatTime(row.timestamp);
return `
<tr>
<td>${time}</td>
<td>${source}</td>
<td>${escapeHtml(row.project_id || '—')}</td>
<td style="max-width:200px;overflow:hidden;text-overflow:ellipsis;font-family:var(--font-mono);font-size:12px;">${preview}</td>
<td><span class="score-badge ${scoreClass}">${score}</span></td>
<td>${row.mistake_count}</td>
<td>${row.token_savings_percent}%</td>
<td>${rewrite}</td>
<td><button class="view-btn" onclick='viewDetail(${row.id})'>View</button></td>
</tr>
`;
}).join('');
}
function filterFeed() {
currentPage = 0;
loadInteractions();
}
function prevPage() {
if (currentPage > 0) { currentPage--; loadInteractions(); }
}
function nextPage() {
currentPage++;
loadInteractions();
}
// ── Agent Leaderboard ─────────────────────────────────────────
async function loadAgents() {
try {
const res = await fetch(`${API}/dashboard/agents`);
const data = await res.json();
const tbody = document.getElementById('agent-body');
if (!data || data.length === 0) {
tbody.innerHTML = '<tr><td colspan="5" class="feed-empty">No agent data yet</td></tr>';
return;
}
tbody.innerHTML = data.map((agent, i) => {
const scoreClass = agent.avg_score >= 85 ? 'excellent' : agent.avg_score >= 65 ? 'good' : agent.avg_score >= 40 ? 'fair' : 'poor';
return `
<tr>
<td style="font-weight:700;">#${i + 1}</td>
<td>🤖 ${escapeHtml(agent.agent_id)}</td>
<td>${agent.total_prompts}</td>
<td><span class="score-badge ${scoreClass}">${agent.avg_score}</span></td>
<td>${agent.improvement_trend}</td>
</tr>
`;
}).join('');
} catch (e) {
console.error('Failed to load agents:', e);
}
}
// ── Detail Modal ──────────────────────────────────────────────
async function viewDetail(id) {
// Find the row in current data
const row = allInteractions.find(r => r.id === id);
if (!row) return;
const fullResult = row.full_result_json ? JSON.parse(row.full_result_json) : null;
const modal = document.getElementById('detail-modal');
const body = document.getElementById('modal-body');
let scoresHtml = '';
if (fullResult && fullResult.scores) {
const dims = ['clarity', 'token_efficiency', 'goal_alignment', 'structure', 'vagueness_index'];
scoresHtml = `<div class="modal-scores">${dims.map(d => {
const s = fullResult.scores[d];
const color = getScoreColor(s.score);
return `<div class="modal-score-item">
<span class="score-val" style="color:${color}">${s.score}</span>
<span class="score-name">${formatDimension(d)}</span>
</div>`;
}).join('')}</div>`;
}
let mistakesHtml = '';
if (fullResult && fullResult.mistakes && fullResult.mistakes.length > 0) {
mistakesHtml = `
<div class="modal-prompt-section">
<div class="modal-prompt-label">Mistakes (${fullResult.mistakes.length})</div>
${fullResult.mistakes.map(m => `
<div style="padding:8px 12px;background:var(--bg-input);border-radius:var(--radius-sm);margin-bottom:8px;border-left:3px solid var(--accent-red);">
<div style="font-size:11px;font-weight:600;color:var(--accent-red);text-transform:uppercase;">${formatMistakeType(m.type)}</div>
${m.text ? `<div style="font-family:var(--font-mono);font-size:12px;margin:4px 0;">"${escapeHtml(m.text)}"</div>` : ''}
<div style="font-size:12px;color:var(--accent-green);">💡 ${escapeHtml(m.suggestion)}</div>
</div>
`).join('')}
</div>
`;
}
body.innerHTML = `
<div style="margin-bottom:16px;">
<span class="score-badge ${row.overall_score >= 85 ? 'excellent' : row.overall_score >= 65 ? 'good' : row.overall_score >= 40 ? 'fair' : 'poor'}" style="font-size:16px;padding:6px 16px;">
Overall: ${row.overall_score}
</span>
<span style="margin-left:12px;color:var(--text-muted);font-size:13px;">
${formatTime(row.timestamp)} · ${row.source_agent ? '🤖 ' + row.source_agent : '👤 Human'}
${row.project_id ? ' · 📁 ' + row.project_id : ''}
</span>
</div>
${scoresHtml}
<div class="modal-prompt-section">
<div class="modal-prompt-label">Original Prompt (${row.original_tokens} tokens)</div>
<div class="modal-prompt-text">${escapeHtml(row.original_prompt)}</div>
</div>
${row.rewritten_prompt ? `
<div class="modal-prompt-section">
<div class="modal-prompt-label">Optimized Rewrite (${row.rewritten_tokens} tokens · ${row.token_savings_percent}% saved)</div>
<div class="modal-prompt-text" style="border-color:var(--accent-green);">${escapeHtml(row.rewritten_prompt)}</div>
</div>` : ''}
${mistakesHtml}
`;
modal.classList.remove('hidden');
}
function closeModal(event) {
if (event.target === event.currentTarget) {
document.getElementById('detail-modal').classList.add('hidden');
}
}
function closeDetail() {
document.getElementById('detail-modal').classList.add('hidden');
}
// ── Helpers ───────────────────────────────────────────────────
function formatTime(ts) {
if (!ts) return '—';
try {
const d = new Date(ts);
return d.toLocaleDateString('en-US', { month: 'short', day: 'numeric' }) +
' ' + d.toLocaleTimeString('en-US', { hour: '2-digit', minute: '2-digit' });
} catch { return ts; }
}
function formatMistakeType(type) {
return (type || 'unknown').replace(/_/g, ' ').replace(/\b\w/g, c => c.toUpperCase());
}
function formatDimension(dim) {
return dim.replace(/_/g, ' ');
}
function getScoreColor(score) {
if (score >= 85) return '#10b981';
if (score >= 65) return '#3b82f6';
if (score >= 40) return '#f59e0b';
return '#ef4444';
}
function escapeHtml(text) {
if (!text) return '';
const div = document.createElement('div');
div.textContent = text;
return div.innerHTML;
}
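
The score thresholds repeated throughout the file above (85 / 65 / 40, in the badge-class ternary and in `getScoreColor`) follow a single bucketing rule. A standalone sketch of that mapping, assuming the same cut-offs (`scoreBucket` and `badgeClass` are illustrative names, not part of the file):

```javascript
// Score buckets used by the dashboard's badges and colors.
// Cut-offs mirror the inline ternaries above:
// >= 85 excellent, >= 65 good, >= 40 fair, otherwise poor.
function scoreBucket(score) {
  if (score >= 85) return 'excellent';
  if (score >= 65) return 'good';
  if (score >= 40) return 'fair';
  return 'poor';
}

// Example: build the badge class string used in the feed table.
function badgeClass(score) {
  return `score-badge ${scoreBucket(score)}`;
}
```

Centralizing the thresholds like this would keep the table badges, modal badge, and chart colors from drifting apart if the cut-offs ever change.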

210
dashboard/index.html Normal file
View File

@ -0,0 +1,210 @@
<!DOCTYPE html>
<html lang="en" class="dark">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Dashboard — Prompt Performance Analytics</title>
<meta name="description" content="View analytics on prompt quality across all interactions, agents, and projects.">
<link rel="preconnect" href="https://fonts.googleapis.com">
<link rel="preconnect" href="https://fonts.gstatic.com" crossorigin>
<link href="https://fonts.googleapis.com/css2?family=Inter:wght@400;500;600;700;800&display=swap" rel="stylesheet">
<link href="https://fonts.googleapis.com/css2?family=JetBrains+Mono:wght@400;500&display=swap" rel="stylesheet">
<link href="https://fonts.googleapis.com/css2?family=Material+Symbols+Outlined:wght,FILL@100..700,0..1&display=swap"
rel="stylesheet">
<link rel="stylesheet" href="/dashboard-static/styles.css?v=2">
</head>
<body>
<!-- Header -->
<header class="header">
<div class="header-brand">
<div class="brand-icon">
<span class="material-symbols-outlined">monitoring</span>
</div>
<div>
<h1 class="brand-title">Analytics Dashboard</h1>
<span class="brand-version">Prompt Performance</span>
</div>
</div>
<div class="header-actions">
<a href="/" class="btn btn-ghost">
<span class="material-symbols-outlined">psychology</span>
Analyzer
</a>
<button class="btn btn-ghost" onclick="refreshAll()">
<span class="material-symbols-outlined">refresh</span>
Refresh
</button>
</div>
</header>
<!-- Main Content -->
<main class="dash-content">
<!-- KPI Cards -->
<section class="kpi-grid" id="kpi-section">
<div class="kpi-card">
<div class="kpi-icon kpi-blue"><span class="material-symbols-outlined">analytics</span></div>
<div class="kpi-data">
<div class="kpi-value" id="kpi-total"></div>
<div class="kpi-label">Total Interactions</div>
</div>
</div>
<div class="kpi-card">
<div class="kpi-icon kpi-green"><span class="material-symbols-outlined">speed</span></div>
<div class="kpi-data">
<div class="kpi-value" id="kpi-avg-score"></div>
<div class="kpi-label">Avg Quality Score</div>
</div>
</div>
<div class="kpi-card">
<div class="kpi-icon kpi-cyan"><span class="material-symbols-outlined">savings</span></div>
<div class="kpi-data">
<div class="kpi-value" id="kpi-savings"></div>
<div class="kpi-label">Avg Token Savings</div>
</div>
</div>
<div class="kpi-card">
<div class="kpi-icon kpi-purple"><span class="material-symbols-outlined">check_circle</span></div>
<div class="kpi-data">
<div class="kpi-value" id="kpi-rewrite"></div>
<div class="kpi-label">Rewrite Acceptance</div>
</div>
</div>
<div class="kpi-card">
<div class="kpi-icon kpi-amber"><span class="material-symbols-outlined">group</span></div>
<div class="kpi-data">
<div class="kpi-value" id="kpi-split"></div>
<div class="kpi-label">Human vs Agent</div>
</div>
</div>
<div class="kpi-card">
<div class="kpi-icon kpi-teal"><span class="material-symbols-outlined">token</span></div>
<div class="kpi-data">
<div class="kpi-value" id="kpi-total-tokens"></div>
<div class="kpi-label">Total Tokens</div>
</div>
</div>
<div class="kpi-card">
<div class="kpi-icon kpi-orange"><span class="material-symbols-outlined">avg_pace</span></div>
<div class="kpi-data">
<div class="kpi-value" id="kpi-avg-tokens"></div>
<div class="kpi-label">Avg Tokens / Prompt</div>
</div>
</div>
</section>
<!-- Charts Row -->
<section class="charts-row">
<div class="chart-card">
<div class="chart-header">
<h3><span class="material-symbols-outlined">show_chart</span> Quality Trend</h3>
<div class="trend-filters" id="trend-filters">
<button class="filter-btn" data-hours="1" onclick="setTrendFilter(this)">1h</button>
<button class="filter-btn" data-hours="5" onclick="setTrendFilter(this)">5h</button>
<button class="filter-btn" data-hours="12" onclick="setTrendFilter(this)">12h</button>
<button class="filter-btn" data-days="1" onclick="setTrendFilter(this)">1d</button>
<button class="filter-btn" data-days="10" onclick="setTrendFilter(this)">10d</button>
<button class="filter-btn active" data-days="30" onclick="setTrendFilter(this)">1mo</button>
</div>
</div>
<div class="chart-body">
<canvas id="trend-chart" height="220"></canvas>
<div id="trend-empty" class="chart-empty hidden">No trend data yet — analyze some prompts first!
</div>
</div>
</div>
<div class="chart-card">
<div class="chart-header">
<h3><span class="material-symbols-outlined">pie_chart</span> Common Mistakes</h3>
</div>
<div class="chart-body">
<canvas id="mistakes-chart" height="220"></canvas>
<div id="mistakes-empty" class="chart-empty hidden">No mistakes recorded yet!</div>
</div>
</div>
</section>
<!-- Interaction Feed -->
<section class="feed-section">
<div class="feed-header">
<h2><span class="material-symbols-outlined">list_alt</span> Interaction Feed</h2>
<div class="feed-controls">
<input type="text" id="feed-filter" placeholder="Filter by project..." class="feed-search"
oninput="filterFeed()">
</div>
</div>
<div class="feed-table-wrap">
<table class="feed-table" id="feed-table">
<thead>
<tr>
<th>Time</th>
<th>Source</th>
<th>Project</th>
<th>Prompt</th>
<th>Score</th>
<th>Mistakes</th>
<th>Savings</th>
<th>Rewrite</th>
<th></th>
</tr>
</thead>
<tbody id="feed-body">
<tr>
<td colspan="9" class="feed-empty">No interactions yet — go analyze some prompts!</td>
</tr>
</tbody>
</table>
</div>
<div class="feed-pagination">
<button class="btn btn-outline btn-sm" id="prev-btn" onclick="prevPage()" disabled>← Previous</button>
<span id="page-info" class="page-info">Page 1</span>
<button class="btn btn-outline btn-sm" id="next-btn" onclick="nextPage()" disabled>Next →</button>
</div>
</section>
<!-- Agent Leaderboard -->
<section class="leaderboard-section">
<div class="feed-header">
<h2><span class="material-symbols-outlined">leaderboard</span> Agent Leaderboard</h2>
</div>
<div class="feed-table-wrap">
<table class="feed-table" id="agent-table">
<thead>
<tr>
<th>Rank</th>
<th>Agent</th>
<th>Prompts</th>
<th>Avg Score</th>
<th>Trend</th>
</tr>
</thead>
<tbody id="agent-body">
<tr>
<td colspan="5" class="feed-empty">No agent data yet</td>
</tr>
</tbody>
</table>
</div>
</section>
<!-- Detail Modal -->
<div class="modal-overlay hidden" id="detail-modal" onclick="closeModal(event)">
<div class="modal-content" onclick="event.stopPropagation()">
<div class="modal-header">
<h3>Analysis Detail</h3>
<button class="btn btn-ghost" onclick="closeDetail()">
<span class="material-symbols-outlined">close</span>
</button>
</div>
<div class="modal-body" id="modal-body"></div>
</div>
</div>
</main>
<script src="https://cdn.jsdelivr.net/npm/chart.js@4/dist/chart.umd.min.js"></script>
<script src="/dashboard-static/app.js?v=2"></script>
</body>
</html>

634
dashboard/styles.css Normal file
View File

@ -0,0 +1,634 @@
/* ── Dashboard Design System ───────────────────────────────── */
:root {
--bg-primary: #0a0e1a;
--bg-secondary: #111827;
--bg-card: #1a2235;
--bg-card-hover: #1e2840;
--bg-input: #0f1629;
--border: #2a3452;
--border-hover: #3b4a70;
--text-primary: #f1f5f9;
--text-secondary: #8899b4;
--text-muted: #5a6a85;
--accent-blue: #3b82f6;
--accent-purple: #8b5cf6;
--accent-green: #10b981;
--accent-red: #ef4444;
--accent-amber: #f59e0b;
--accent-cyan: #06b6d4;
--font-sans: 'Inter', -apple-system, sans-serif;
--font-mono: 'JetBrains Mono', monospace;
--radius-sm: 6px;
--radius-md: 10px;
--radius-lg: 16px;
--shadow-card: 0 4px 24px rgba(0, 0, 0, 0.3);
}
*,
*::before,
*::after {
box-sizing: border-box;
margin: 0;
padding: 0;
}
body {
font-family: var(--font-sans);
background: var(--bg-primary);
color: var(--text-primary);
min-height: 100vh;
-webkit-font-smoothing: antialiased;
line-height: 1.6;
}
::-webkit-scrollbar {
width: 8px;
height: 8px;
}
::-webkit-scrollbar-track {
background: var(--bg-primary);
}
::-webkit-scrollbar-thumb {
background: var(--border);
border-radius: 4px;
}
/* ── Header ────────────────────────────────────────────────── */
.header {
display: flex;
align-items: center;
justify-content: space-between;
padding: 16px 32px;
border-bottom: 1px solid var(--border);
background: var(--bg-secondary);
position: sticky;
top: 0;
z-index: 50;
}
.header-brand {
display: flex;
align-items: center;
gap: 12px;
}
.brand-icon {
width: 40px;
height: 40px;
border-radius: var(--radius-md);
background: linear-gradient(135deg, var(--accent-cyan), var(--accent-blue));
display: flex;
align-items: center;
justify-content: center;
color: white;
}
.brand-title {
font-size: 18px;
font-weight: 700;
}
.brand-version {
font-size: 11px;
color: var(--text-muted);
}
.header-actions {
display: flex;
gap: 8px;
}
/* ── Buttons ───────────────────────────────────────────────── */
.btn {
display: inline-flex;
align-items: center;
gap: 8px;
padding: 10px 20px;
font-size: 14px;
font-weight: 600;
font-family: var(--font-sans);
border-radius: var(--radius-md);
border: none;
cursor: pointer;
transition: all 0.2s;
text-decoration: none;
}
.btn .material-symbols-outlined {
font-size: 18px;
}
.btn-ghost {
background: transparent;
color: var(--text-secondary);
padding: 8px 16px;
}
.btn-ghost:hover {
background: var(--bg-card);
color: var(--text-primary);
}
.btn-outline {
background: transparent;
color: var(--text-secondary);
border: 1px solid var(--border);
}
.btn-outline:hover {
border-color: var(--text-secondary);
color: var(--text-primary);
}
.btn-outline:disabled {
opacity: 0.4;
cursor: not-allowed;
}
.btn-sm {
padding: 6px 14px;
font-size: 13px;
}
/* ── Main ──────────────────────────────────────────────────── */
.dash-content {
max-width: 1300px;
margin: 0 auto;
padding: 28px 24px 64px;
}
/* ── KPI Cards ─────────────────────────────────────────────── */
.kpi-grid {
display: grid;
grid-template-columns: repeat(4, 1fr);
gap: 16px;
}
.kpi-card {
background: var(--bg-card);
border: 1px solid var(--border);
border-radius: var(--radius-lg);
padding: 20px;
display: flex;
align-items: center;
gap: 14px;
transition: border-color 0.2s, transform 0.2s;
box-shadow: var(--shadow-card);
}
.kpi-card:hover {
border-color: var(--accent-blue);
transform: translateY(-2px);
}
.kpi-icon {
width: 44px;
height: 44px;
border-radius: var(--radius-md);
display: flex;
align-items: center;
justify-content: center;
flex-shrink: 0;
}
.kpi-icon .material-symbols-outlined {
font-size: 22px;
color: white;
}
.kpi-blue {
background: linear-gradient(135deg, #3b82f6, #2563eb);
}
.kpi-green {
background: linear-gradient(135deg, #10b981, #059669);
}
.kpi-cyan {
background: linear-gradient(135deg, #06b6d4, #0891b2);
}
.kpi-purple {
background: linear-gradient(135deg, #8b5cf6, #7c3aed);
}
.kpi-amber {
background: linear-gradient(135deg, #f59e0b, #d97706);
}
.kpi-red {
background: linear-gradient(135deg, #ef4444, #dc2626);
}
.kpi-teal {
background: linear-gradient(135deg, #14b8a6, #0d9488);
}
.kpi-orange {
background: linear-gradient(135deg, #f97316, #ea580c);
}
.kpi-value {
font-size: 22px;
font-weight: 800;
line-height: 1.2;
}
.kpi-label {
font-size: 12px;
color: var(--text-secondary);
font-weight: 500;
}
.kpi-hint {
font-size: 10px;
color: var(--text-muted);
margin-top: 2px;
line-height: 1.3;
}
/* ── Charts ────────────────────────────────────────────────── */
.charts-row {
display: grid;
grid-template-columns: 1fr 1fr;
gap: 16px;
margin-top: 24px;
}
.chart-card {
background: var(--bg-card);
border: 1px solid var(--border);
border-radius: var(--radius-lg);
box-shadow: var(--shadow-card);
overflow: hidden;
}
.chart-header {
padding: 18px 22px 0;
display: flex;
align-items: center;
justify-content: space-between;
}
.chart-header h3 {
display: flex;
align-items: center;
gap: 8px;
font-size: 15px;
font-weight: 600;
}
.chart-header h3 .material-symbols-outlined {
font-size: 20px;
color: var(--accent-blue);
}
.trend-filters {
display: flex;
gap: 4px;
}
.filter-btn {
background: var(--bg-input);
border: 1px solid var(--border);
color: var(--text-muted);
font-family: var(--font-sans);
font-size: 11px;
font-weight: 600;
padding: 4px 10px;
border-radius: var(--radius-sm);
cursor: pointer;
transition: all 0.2s;
}
.filter-btn:hover {
border-color: var(--accent-blue);
color: var(--text-primary);
}
.filter-btn.active {
background: var(--accent-blue);
border-color: var(--accent-blue);
color: white;
}
.chart-body {
padding: 16px 22px 22px;
}
.chart-empty {
text-align: center;
color: var(--text-muted);
padding: 40px 20px;
font-size: 14px;
}
/* ── Feed / Table ──────────────────────────────────────────── */
.feed-section,
.leaderboard-section {
margin-top: 24px;
background: var(--bg-card);
border: 1px solid var(--border);
border-radius: var(--radius-lg);
box-shadow: var(--shadow-card);
overflow: hidden;
}
.feed-header {
display: flex;
align-items: center;
justify-content: space-between;
padding: 18px 22px;
border-bottom: 1px solid var(--border);
}
.feed-header h2 {
display: flex;
align-items: center;
gap: 10px;
font-size: 16px;
font-weight: 600;
}
.feed-header h2 .material-symbols-outlined {
font-size: 20px;
color: var(--accent-blue);
}
.feed-search {
background: var(--bg-input);
border: 1px solid var(--border);
border-radius: var(--radius-md);
padding: 8px 14px;
font-family: var(--font-sans);
font-size: 13px;
color: var(--text-primary);
width: 220px;
outline: none;
transition: border-color 0.2s;
}
.feed-search:focus {
border-color: var(--accent-blue);
}
.feed-table-wrap {
overflow-x: auto;
}
.feed-table {
width: 100%;
border-collapse: collapse;
font-size: 13px;
}
.feed-table th {
text-align: left;
padding: 12px 16px;
font-weight: 600;
font-size: 11px;
text-transform: uppercase;
letter-spacing: 0.5px;
color: var(--text-muted);
background: var(--bg-input);
border-bottom: 1px solid var(--border);
white-space: nowrap;
}
.feed-table td {
padding: 12px 16px;
border-bottom: 1px solid rgba(42, 52, 82, 0.5);
color: var(--text-secondary);
white-space: nowrap;
}
.feed-table tbody tr {
transition: background 0.15s;
}
.feed-table tbody tr:hover {
background: var(--bg-card-hover);
}
.feed-empty {
text-align: center;
padding: 40px !important;
color: var(--text-muted) !important;
white-space: normal !important;
}
/* Score badge in table */
.score-badge {
display: inline-block;
padding: 3px 10px;
border-radius: 999px;
font-weight: 700;
font-size: 12px;
}
.score-badge.excellent {
background: rgba(16, 185, 129, .15);
color: var(--accent-green);
}
.score-badge.good {
background: rgba(59, 130, 246, .15);
color: var(--accent-blue);
}
.score-badge.fair {
background: rgba(245, 158, 11, .15);
color: var(--accent-amber);
}
.score-badge.poor {
background: rgba(239, 68, 68, .15);
color: var(--accent-red);
}
.source-badge {
display: inline-flex;
align-items: center;
gap: 4px;
font-size: 12px;
}
.view-btn {
background: transparent;
border: 1px solid var(--border);
color: var(--accent-blue);
font-size: 12px;
padding: 4px 10px;
border-radius: var(--radius-sm);
cursor: pointer;
font-family: var(--font-sans);
transition: all 0.2s;
}
.view-btn:hover {
background: var(--accent-blue);
color: white;
}
.feed-pagination {
display: flex;
align-items: center;
justify-content: center;
gap: 16px;
padding: 16px;
border-top: 1px solid var(--border);
}
.page-info {
font-size: 13px;
color: var(--text-muted);
}
/* ── Modal ─────────────────────────────────────────────────── */
.modal-overlay {
position: fixed;
inset: 0;
background: rgba(0, 0, 0, 0.7);
display: flex;
align-items: center;
justify-content: center;
z-index: 100;
backdrop-filter: blur(4px);
}
.modal-content {
background: var(--bg-card);
border: 1px solid var(--border);
border-radius: var(--radius-lg);
width: 90%;
max-width: 800px;
max-height: 80vh;
overflow-y: auto;
box-shadow: 0 20px 60px rgba(0, 0, 0, 0.5);
}
.modal-header {
display: flex;
align-items: center;
justify-content: space-between;
padding: 18px 24px;
border-bottom: 1px solid var(--border);
}
.modal-header h3 {
font-size: 16px;
}
.modal-body {
padding: 24px;
}
.modal-scores {
display: grid;
grid-template-columns: repeat(5, 1fr);
gap: 12px;
margin-bottom: 20px;
}
.modal-score-item {
text-align: center;
padding: 12px;
background: var(--bg-input);
border-radius: var(--radius-md);
}
.modal-score-item .score-val {
font-size: 24px;
font-weight: 800;
display: block;
}
.modal-score-item .score-name {
font-size: 11px;
color: var(--text-muted);
text-transform: uppercase;
letter-spacing: 0.5px;
}
.modal-prompt-section {
margin-top: 16px;
}
.modal-prompt-label {
font-size: 12px;
font-weight: 600;
color: var(--text-muted);
text-transform: uppercase;
letter-spacing: 0.5px;
margin-bottom: 6px;
}
.modal-prompt-text {
background: var(--bg-input);
border: 1px solid var(--border);
border-radius: var(--radius-md);
padding: 14px;
font-family: var(--font-mono);
font-size: 13px;
line-height: 1.7;
white-space: pre-wrap;
word-break: break-word;
max-height: 200px;
overflow-y: auto;
}
/* ── Utilities ─────────────────────────────────────────────── */
.hidden {
display: none !important;
}
/* ── Responsive ────────────────────────────────────────────── */
@media (max-width: 1200px) {
.kpi-grid {
grid-template-columns: repeat(3, 1fr);
}
.trend-filters {
flex-wrap: wrap;
}
}
@media (max-width: 768px) {
.header {
padding: 12px 16px;
}
.dash-content {
padding: 16px 12px 48px;
}
.kpi-grid {
grid-template-columns: repeat(2, 1fr);
}
.charts-row {
grid-template-columns: 1fr;
}
.modal-scores {
grid-template-columns: repeat(3, 1fr);
}
}

304
frontend/app.js Normal file
View File

@ -0,0 +1,304 @@
/**
* Prompt Analyzer Frontend Logic
*/
const API_BASE = window.location.origin;
let currentAnalysisId = null;
let currentResult = null;
// ── Character count ───────────────────────────────────────────
const promptInput = document.getElementById('prompt-input');
const charCount = document.getElementById('char-count');
promptInput.addEventListener('input', () => {
const len = promptInput.value.length;
charCount.textContent = `${len.toLocaleString()} characters`;
});
// ── Analyze ───────────────────────────────────────────────────
async function analyzePrompt() {
const prompt = promptInput.value.trim();
if (!prompt) {
showToast('Please enter a prompt to analyze', 'warning');
return;
}
const context = document.getElementById('context-input').value.trim() || null;
const projectId = document.getElementById('project-input').value.trim() || null;
const btn = document.getElementById('analyze-btn');
btn.disabled = true;
hideResults();
hideError();
showLoading();
try {
const response = await fetch(`${API_BASE}/analyze`, {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({
prompt,
context,
project_id: projectId,
}),
});
if (!response.ok) {
const err = await response.json().catch(() => ({}));
throw new Error(err.detail || `HTTP ${response.status}`);
}
const result = await response.json();
currentResult = result;
currentAnalysisId = result.analysis_id;
hideLoading();
renderResults(result);
} catch (error) {
hideLoading();
showError(error.message);
} finally {
btn.disabled = false;
}
}
// ── Render Results ────────────────────────────────────────────
function renderResults(result) {
const section = document.getElementById('results-section');
section.classList.remove('hidden');
// Overall score
const score = result.overall_score;
const scoreCard = document.getElementById('overall-score-card');
const scoreRing = document.getElementById('score-ring');
const scoreValue = document.getElementById('overall-score');
const scoreLabel = document.getElementById('overall-label');
// Remove old class
scoreCard.className = 'overall-score-card';
if (score >= 85) {
scoreCard.classList.add('score-excellent');
scoreLabel.textContent = 'Excellent — This prompt is well-crafted';
} else if (score >= 65) {
scoreCard.classList.add('score-good');
scoreLabel.textContent = 'Good — Minor improvements possible';
} else if (score >= 40) {
scoreCard.classList.add('score-fair');
scoreLabel.textContent = 'Fair — Several issues to address';
} else {
scoreCard.classList.add('score-poor');
scoreLabel.textContent = 'Poor — Major improvements needed';
}
// Animate ring
const circumference = 2 * Math.PI * 52; // r=52
const offset = circumference - (score / 100) * circumference;
scoreRing.style.strokeDashoffset = offset;
// Animate number
animateNumber(scoreValue, score);
// Dimension scores
const dimensions = ['clarity', 'token_efficiency', 'goal_alignment', 'structure', 'vagueness_index'];
dimensions.forEach(dim => {
const data = result.scores[dim];
const numEl = document.querySelector(`[data-dimension="${dim}"]`);
const reasoningEl = document.getElementById(`reasoning-${dim}`);
animateNumber(numEl, data.score);
reasoningEl.textContent = data.reasoning;
// Color the score
const card = document.getElementById(`score-${dim}`);
card.style.borderColor = getScoreColor(data.score);
numEl.style.color = getScoreColor(data.score);
});
// Mistakes
const mistakesList = document.getElementById('mistakes-list');
const mistakeCount = document.getElementById('mistake-count');
const mistakes = result.mistakes || [];
mistakeCount.textContent = `${mistakes.length} issue${mistakes.length !== 1 ? 's' : ''}`;
mistakeCount.className = mistakes.length === 0 ? 'badge badge-success' : 'badge badge-error';
if (mistakes.length === 0) {
mistakesList.innerHTML = `
<div style="text-align:center; padding:20px; color:var(--accent-green);">
<span class="material-symbols-outlined" style="font-size:32px;">check_circle</span>
<p style="margin-top:8px;">No issues found great prompt!</p>
</div>
`;
} else {
mistakesList.innerHTML = mistakes.map(m => `
<div class="mistake-item">
<div class="mistake-icon">
<span class="material-symbols-outlined">${getMistakeIcon(m.type)}</span>
</div>
<div class="mistake-content">
<div class="mistake-type">${formatMistakeType(m.type)}</div>
${m.text ? `<div class="mistake-text">"${escapeHtml(m.text)}"</div>` : ''}
<div class="mistake-suggestion">${escapeHtml(m.suggestion)}</div>
</div>
</div>
`).join('');
}
// Rewrite comparison
document.getElementById('original-text').textContent = result.original_prompt;
document.getElementById('rewritten-text').textContent = result.rewritten_prompt;
const tc = result.token_comparison;
document.getElementById('original-tokens').textContent = `${tc.original_tokens} tokens`;
document.getElementById('rewritten-tokens').textContent = `${tc.rewritten_tokens} tokens`;
document.getElementById('savings-text').textContent = `${tc.savings_percent}% saved`;
// Scroll to results
section.scrollIntoView({ behavior: 'smooth', block: 'start' });
}
// ── Rewrite Choice ────────────────────────────────────────────
async function useRewrite() {
if (!currentResult) return;
// Paste rewritten prompt into the input textarea
promptInput.value = currentResult.rewritten_prompt;
charCount.textContent = `${promptInput.value.length.toLocaleString()} characters`;
// Also copy to clipboard
try {
await navigator.clipboard.writeText(currentResult.rewritten_prompt);
showToast('Rewritten prompt pasted into input & copied to clipboard!', 'success');
} catch {
showToast('Rewritten prompt pasted into input!', 'success');
}
// Scroll back to prompt input
promptInput.scrollIntoView({ behavior: 'smooth', block: 'center' });
promptInput.focus();
if (currentAnalysisId) {
fetch(`${API_BASE}/rewrite-choice`, {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ analysis_id: currentAnalysisId, used_rewrite: true }),
}).catch(() => { });
}
}
async function keepOriginal() {
if (!currentResult) return;
try {
await navigator.clipboard.writeText(currentResult.original_prompt);
showToast('Original prompt copied to clipboard!', 'success');
} catch {
showToast('Could not copy to clipboard', 'warning');
}
if (currentAnalysisId) {
fetch(`${API_BASE}/rewrite-choice`, {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ analysis_id: currentAnalysisId, used_rewrite: false }),
}).catch(() => { });
}
}
// ── Helpers ───────────────────────────────────────────────────
function animateNumber(el, target) {
let current = 0;
const duration = 1200;
const stepTime = 16;
const steps = duration / stepTime;
const increment = target / steps;
const timer = setInterval(() => {
current += increment;
if (current >= target) {
current = target;
clearInterval(timer);
}
el.textContent = Math.round(current);
}, stepTime);
}
function getScoreColor(score) {
if (score >= 85) return 'var(--accent-green)';
if (score >= 65) return 'var(--accent-blue)';
if (score >= 40) return 'var(--accent-amber)';
return 'var(--accent-red)';
}
function getMistakeIcon(type) {
const icons = {
vague_instruction: 'blur_on',
missing_context: 'help_outline',
redundancy: 'content_copy',
contradiction: 'sync_problem',
poor_formatting: 'format_align_left',
missing_output_format: 'output',
unclear_scope: 'unfold_more',
overly_complex: 'device_hub',
};
return icons[type] || 'warning';
}
function formatMistakeType(type) {
return (type || 'unknown').replace(/_/g, ' ').replace(/\b\w/g, c => c.toUpperCase());
}
function escapeHtml(text) {
const div = document.createElement('div');
div.textContent = text;
return div.innerHTML;
}
function showLoading() {
document.getElementById('loading-section').classList.remove('hidden');
}
function hideLoading() {
document.getElementById('loading-section').classList.add('hidden');
}
function hideResults() {
document.getElementById('results-section').classList.add('hidden');
}
function showError(message) {
document.getElementById('error-message').textContent = message;
document.getElementById('error-section').classList.remove('hidden');
}
function hideError() {
document.getElementById('error-section').classList.add('hidden');
}
function showToast(message, type = 'success') {
const toast = document.getElementById('toast');
const icon = document.getElementById('toast-icon');
const msg = document.getElementById('toast-message');
icon.textContent = type === 'success' ? 'check_circle' : 'warning';
icon.style.color = type === 'success' ? 'var(--accent-green)' : 'var(--accent-amber)';
msg.textContent = message;
toast.classList.remove('hidden');
setTimeout(() => toast.classList.add('hidden'), 3000);
}
// Allow Ctrl+Enter to submit
promptInput.addEventListener('keydown', (e) => {
if ((e.ctrlKey || e.metaKey) && e.key === 'Enter') {
analyzePrompt();
}
});
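
The SVG ring animation in `renderResults` above maps a 0–100 score onto a `stroke-dashoffset`: the dash length of a full circle is its circumference, and the offset hides the unfilled fraction. The arithmetic, isolated (assuming the same r=52 circle; `ringOffset` is an illustrative helper, not part of the file):

```javascript
// Stroke-dashoffset for a progress ring of radius r:
// the full circle's dash length is 2 * pi * r, and the offset
// hides the unfilled fraction, so score=100 -> offset 0 (full ring)
// and score=0 -> offset = circumference (empty ring).
function ringOffset(score, r = 52) {
  const circumference = 2 * Math.PI * r;
  return circumference - (score / 100) * circumference;
}
```

For the ring to render correctly, the `.score-ring-fill` circle's `stroke-dasharray` must be set to the same circumference (about 326.7 for r=52), presumably in the stylesheet.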

230
frontend/index.html Normal file
View File

@ -0,0 +1,230 @@
<!DOCTYPE html>
<html lang="en" class="dark">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Prompt Analyzer — AI-Powered Prompt Quality Analysis</title>
<meta name="description" content="Analyze your prompts for clarity, efficiency, and structure. Get AI-powered rewrites and quality scores.">
<link rel="preconnect" href="https://fonts.googleapis.com">
<link rel="preconnect" href="https://fonts.gstatic.com" crossorigin>
<link href="https://fonts.googleapis.com/css2?family=Inter:wght@400;500;600;700;800&display=swap" rel="stylesheet">
<link href="https://fonts.googleapis.com/css2?family=JetBrains+Mono:wght@400;500&display=swap" rel="stylesheet">
<link href="https://fonts.googleapis.com/css2?family=Material+Symbols+Outlined:wght,FILL@100..700,0..1&display=swap" rel="stylesheet">
<link rel="stylesheet" href="/static/styles.css">
</head>
<body>
<!-- Header -->
<header class="header">
<div class="header-brand">
<div class="brand-icon">
<span class="material-symbols-outlined">psychology</span>
</div>
<div>
<h1 class="brand-title">PromptAnalyzer</h1>
<span class="brand-version">v0.1.0</span>
</div>
</div>
<div class="header-actions">
<a href="/dashboard-ui" class="btn btn-ghost">
<span class="material-symbols-outlined">dashboard</span>
Dashboard
</a>
</div>
</header>
<!-- Main Content -->
<main class="main-content">
<!-- Input Section -->
<section class="input-section">
<div class="input-card">
<div class="card-header">
<h2>
<span class="material-symbols-outlined">edit_note</span>
Enter Your Prompt
</h2>
<span class="badge badge-info">AI Analysis</span>
</div>
<div class="input-fields">
<div class="field-group">
<label for="prompt-input">Prompt</label>
<textarea id="prompt-input" rows="8" placeholder="Paste or type your prompt here..."></textarea>
<div class="field-footer">
<span id="char-count" class="char-count">0 characters</span>
</div>
</div>
<div class="field-row">
<div class="field-group flex-1">
<label for="context-input">Context / Goal <span class="optional">(optional)</span></label>
<input type="text" id="context-input" placeholder="e.g., Customer support chatbot, Code review...">
</div>
<div class="field-group flex-1">
<label for="project-input">Project ID <span class="optional">(optional)</span></label>
<input type="text" id="project-input" placeholder="e.g., customer_support_bot">
</div>
</div>
<button id="analyze-btn" class="btn btn-primary btn-lg" onclick="analyzePrompt()">
<span class="material-symbols-outlined">auto_awesome</span>
Analyze Prompt
</button>
</div>
</div>
</section>
<!-- Loading -->
<section id="loading-section" class="loading-section hidden">
<div class="loading-card">
<div class="spinner"></div>
<p>Analyzing your prompt with Claude...</p>
<p class="loading-sub">Scoring clarity, efficiency, structure, and more</p>
</div>
</section>
<!-- Results Section -->
<section id="results-section" class="results-section hidden">
<!-- Overall Score -->
<div class="overall-score-card" id="overall-score-card">
<div class="overall-score-ring">
<svg viewBox="0 0 120 120">
<circle cx="60" cy="60" r="52" class="score-ring-bg"/>
<circle cx="60" cy="60" r="52" class="score-ring-fill" id="score-ring"/>
</svg>
<span class="overall-score-value" id="overall-score">0</span>
</div>
<div class="overall-score-info">
<h2>Overall Quality Score</h2>
<p id="overall-label" class="score-label"></p>
</div>
</div>
<!-- Score Cards -->
<div class="scores-grid">
<div class="score-card" id="score-clarity">
<div class="score-card-header">
<span class="material-symbols-outlined">visibility</span>
<h3>Clarity</h3>
</div>
<div class="score-value-wrap">
<span class="score-num" data-dimension="clarity">0</span>
<span class="score-max">/100</span>
</div>
<p class="score-reasoning" id="reasoning-clarity"></p>
</div>
<div class="score-card" id="score-token_efficiency">
<div class="score-card-header">
<span class="material-symbols-outlined">token</span>
<h3>Token Efficiency</h3>
</div>
<div class="score-value-wrap">
<span class="score-num" data-dimension="token_efficiency">0</span>
<span class="score-max">/100</span>
</div>
<p class="score-reasoning" id="reasoning-token_efficiency"></p>
</div>
<div class="score-card" id="score-goal_alignment">
<div class="score-card-header">
<span class="material-symbols-outlined">target</span>
<h3>Goal Alignment</h3>
</div>
<div class="score-value-wrap">
<span class="score-num" data-dimension="goal_alignment">0</span>
<span class="score-max">/100</span>
</div>
<p class="score-reasoning" id="reasoning-goal_alignment"></p>
</div>
<div class="score-card" id="score-structure">
<div class="score-card-header">
<span class="material-symbols-outlined">account_tree</span>
<h3>Structure</h3>
</div>
<div class="score-value-wrap">
<span class="score-num" data-dimension="structure">0</span>
<span class="score-max">/100</span>
</div>
<p class="score-reasoning" id="reasoning-structure"></p>
</div>
<div class="score-card" id="score-vagueness_index">
<div class="score-card-header">
<span class="material-symbols-outlined">blur_on</span>
<h3>Vagueness</h3>
</div>
<div class="score-value-wrap">
<span class="score-num" data-dimension="vagueness_index">0</span>
<span class="score-max">/100</span>
</div>
<p class="score-reasoning" id="reasoning-vagueness_index"></p>
</div>
</div>
<!-- Mistakes -->
<div class="mistakes-card" id="mistakes-card">
<div class="card-header">
<h2>
<span class="material-symbols-outlined">bug_report</span>
Identified Mistakes
</h2>
<span class="badge badge-error" id="mistake-count">0 issues</span>
</div>
<div id="mistakes-list" class="mistakes-list"></div>
</div>
<!-- Rewrite Comparison -->
<div class="rewrite-card" id="rewrite-card">
<div class="card-header">
<h2>
<span class="material-symbols-outlined">compare_arrows</span>
Prompt Comparison
</h2>
<div class="token-savings" id="token-savings">
<span class="material-symbols-outlined">savings</span>
<span id="savings-text">0% saved</span>
</div>
</div>
<div class="comparison-grid">
<div class="comparison-col">
<div class="comparison-label">
<span class="dot dot-red"></span>Original
<span class="token-badge" id="original-tokens">0 tokens</span>
</div>
<div class="comparison-text" id="original-text"></div>
</div>
<div class="comparison-col">
<div class="comparison-label">
<span class="dot dot-green"></span>Optimized Rewrite
<span class="token-badge" id="rewritten-tokens">0 tokens</span>
</div>
<div class="comparison-text" id="rewritten-text"></div>
</div>
</div>
<div class="rewrite-actions">
<button class="btn btn-success" onclick="useRewrite()">
<span class="material-symbols-outlined">check_circle</span>
Use Rewritten Prompt
</button>
<button class="btn btn-outline" onclick="keepOriginal()">
<span class="material-symbols-outlined">undo</span>
Keep Original
</button>
</div>
</div>
</section>
<!-- Error -->
<section id="error-section" class="error-section hidden">
<div class="error-card">
<span class="material-symbols-outlined">error</span>
<h3>Analysis Failed</h3>
<p id="error-message">Something went wrong.</p>
<button class="btn btn-outline" onclick="hideError()">Try Again</button>
</div>
</section>
</main>
<!-- Toast -->
<div id="toast" class="toast hidden">
<span class="material-symbols-outlined" id="toast-icon">check_circle</span>
<span id="toast-message">Copied to clipboard!</span>
</div>
<script src="/static/app.js"></script>
</body>
</html>

frontend/styles.css Normal file

@@ -0,0 +1,694 @@
/* ── Design System ──────────────────────────────────────────── */
:root {
--bg-primary: #0a0e1a;
--bg-secondary: #111827;
--bg-card: #1a2235;
--bg-card-hover: #1e2840;
--bg-input: #0f1629;
--border: #2a3452;
--border-hover: #3b4a70;
--text-primary: #f1f5f9;
--text-secondary: #8899b4;
--text-muted: #5a6a85;
--accent-blue: #3b82f6;
--accent-purple: #8b5cf6;
--accent-green: #10b981;
--accent-red: #ef4444;
--accent-amber: #f59e0b;
--accent-cyan: #06b6d4;
--font-sans: 'Inter', -apple-system, sans-serif;
--font-mono: 'JetBrains Mono', monospace;
--radius-sm: 6px;
--radius-md: 10px;
--radius-lg: 16px;
--radius-xl: 20px;
--shadow-card: 0 4px 24px rgba(0, 0, 0, 0.3);
--shadow-glow: 0 0 20px rgba(59, 130, 246, 0.15);
}
/* ── Reset & Base ──────────────────────────────────────────── */
*, *::before, *::after {
box-sizing: border-box;
margin: 0;
padding: 0;
}
body {
font-family: var(--font-sans);
background: var(--bg-primary);
color: var(--text-primary);
min-height: 100vh;
-webkit-font-smoothing: antialiased;
line-height: 1.6;
}
/* Custom scrollbar */
::-webkit-scrollbar { width: 8px; height: 8px; }
::-webkit-scrollbar-track { background: var(--bg-primary); }
::-webkit-scrollbar-thumb { background: var(--border); border-radius: 4px; }
::-webkit-scrollbar-thumb:hover { background: var(--border-hover); }
/* ── Header ────────────────────────────────────────────────── */
.header {
display: flex;
align-items: center;
justify-content: space-between;
padding: 16px 32px;
border-bottom: 1px solid var(--border);
background: var(--bg-secondary);
position: sticky;
top: 0;
z-index: 50;
backdrop-filter: blur(12px);
}
.header-brand {
display: flex;
align-items: center;
gap: 12px;
}
.brand-icon {
width: 40px;
height: 40px;
border-radius: var(--radius-md);
background: linear-gradient(135deg, var(--accent-blue), var(--accent-purple));
display: flex;
align-items: center;
justify-content: center;
color: white;
}
.brand-title {
font-size: 18px;
font-weight: 700;
letter-spacing: 0.5px;
}
.brand-version {
font-size: 11px;
color: var(--text-muted);
font-weight: 500;
}
.header-actions {
display: flex;
gap: 8px;
}
/* ── Main Content ──────────────────────────────────────────── */
.main-content {
max-width: 1100px;
margin: 0 auto;
padding: 32px 24px 64px;
}
/* ── Cards ─────────────────────────────────────────────────── */
.input-card, .mistakes-card, .rewrite-card, .loading-card, .error-card {
background: var(--bg-card);
border: 1px solid var(--border);
border-radius: var(--radius-lg);
padding: 28px;
box-shadow: var(--shadow-card);
}
.card-header {
display: flex;
align-items: center;
justify-content: space-between;
margin-bottom: 20px;
}
.card-header h2 {
display: flex;
align-items: center;
gap: 10px;
font-size: 18px;
font-weight: 600;
}
.card-header h2 .material-symbols-outlined {
font-size: 22px;
color: var(--accent-blue);
}
/* ── Input Section ─────────────────────────────────────────── */
.input-fields {
display: flex;
flex-direction: column;
gap: 16px;
}
.field-group {
display: flex;
flex-direction: column;
gap: 6px;
}
.field-group label {
font-size: 13px;
font-weight: 600;
color: var(--text-secondary);
text-transform: uppercase;
letter-spacing: 0.5px;
}
.optional {
font-weight: 400;
text-transform: none;
color: var(--text-muted);
}
.field-row {
display: flex;
gap: 16px;
}
.flex-1 { flex: 1; }
textarea, input[type="text"] {
background: var(--bg-input);
border: 1px solid var(--border);
border-radius: var(--radius-md);
padding: 14px 16px;
font-family: var(--font-mono);
font-size: 14px;
color: var(--text-primary);
resize: vertical;
transition: border-color 0.2s, box-shadow 0.2s;
outline: none;
width: 100%;
}
textarea:focus, input[type="text"]:focus {
border-color: var(--accent-blue);
box-shadow: 0 0 0 3px rgba(59, 130, 246, 0.15);
}
textarea::placeholder, input::placeholder {
color: var(--text-muted);
}
.field-footer {
display: flex;
justify-content: flex-end;
}
.char-count {
font-size: 12px;
color: var(--text-muted);
font-family: var(--font-mono);
}
/* ── Buttons ───────────────────────────────────────────────── */
.btn {
display: inline-flex;
align-items: center;
gap: 8px;
padding: 10px 20px;
font-size: 14px;
font-weight: 600;
font-family: var(--font-sans);
border-radius: var(--radius-md);
border: none;
cursor: pointer;
transition: all 0.2s;
text-decoration: none;
}
.btn .material-symbols-outlined { font-size: 18px; }
.btn-primary {
background: linear-gradient(135deg, var(--accent-blue), var(--accent-purple));
color: white;
box-shadow: 0 4px 14px rgba(59, 130, 246, 0.3);
}
.btn-primary:hover {
transform: translateY(-1px);
box-shadow: 0 6px 20px rgba(59, 130, 246, 0.4);
}
.btn-primary:disabled {
opacity: 0.5;
cursor: not-allowed;
transform: none;
}
.btn-lg {
padding: 14px 28px;
font-size: 15px;
}
.btn-success {
background: var(--accent-green);
color: white;
}
.btn-success:hover {
background: #0d9668;
transform: translateY(-1px);
}
.btn-outline {
background: transparent;
color: var(--text-secondary);
border: 1px solid var(--border);
}
.btn-outline:hover {
border-color: var(--text-secondary);
color: var(--text-primary);
}
.btn-ghost {
background: transparent;
color: var(--text-secondary);
padding: 8px 16px;
}
.btn-ghost:hover {
background: var(--bg-card);
color: var(--text-primary);
}
/* ── Badges ────────────────────────────────────────────────── */
.badge {
font-size: 12px;
font-weight: 600;
padding: 4px 10px;
border-radius: 999px;
}
.badge-info {
background: rgba(59, 130, 246, 0.15);
color: var(--accent-blue);
}
.badge-error {
background: rgba(239, 68, 68, 0.15);
color: var(--accent-red);
}
.badge-success {
background: rgba(16, 185, 129, 0.15);
color: var(--accent-green);
}
/* ── Loading ───────────────────────────────────────────────── */
.loading-section { margin-top: 32px; }
.loading-card {
text-align: center;
padding: 48px;
}
.spinner {
width: 48px;
height: 48px;
border: 3px solid var(--border);
border-top-color: var(--accent-blue);
border-radius: 50%;
margin: 0 auto 20px;
animation: spin 0.8s linear infinite;
}
@keyframes spin { to { transform: rotate(360deg); } }
.loading-sub {
font-size: 13px;
color: var(--text-muted);
margin-top: 6px;
}
/* ── Overall Score ─────────────────────────────────────────── */
.overall-score-card {
display: flex;
align-items: center;
gap: 24px;
background: var(--bg-card);
border: 1px solid var(--border);
border-radius: var(--radius-lg);
padding: 28px 32px;
margin-top: 32px;
box-shadow: var(--shadow-card);
}
.overall-score-ring {
position: relative;
width: 100px;
height: 100px;
flex-shrink: 0;
}
.overall-score-ring svg {
width: 100%;
height: 100%;
transform: rotate(-90deg);
}
.score-ring-bg {
fill: none;
stroke: var(--border);
stroke-width: 8;
}
.score-ring-fill {
fill: none;
stroke: var(--accent-blue);
stroke-width: 8;
stroke-linecap: round;
stroke-dasharray: 327; /* ring circumference (2πr); fully offset = empty ring */
stroke-dashoffset: 327;
transition: stroke-dashoffset 1.5s ease-out, stroke 0.5s;
}
.overall-score-value {
position: absolute;
inset: 0;
display: flex;
align-items: center;
justify-content: center;
font-size: 28px;
font-weight: 800;
}
.overall-score-info h2 {
font-size: 20px;
font-weight: 700;
}
.score-label {
font-size: 14px;
color: var(--text-secondary);
margin-top: 4px;
}
/* Score colors */
.score-excellent .score-ring-fill { stroke: var(--accent-green); }
.score-good .score-ring-fill { stroke: var(--accent-blue); }
.score-fair .score-ring-fill { stroke: var(--accent-amber); }
.score-poor .score-ring-fill { stroke: var(--accent-red); }
.score-excellent .overall-score-value { color: var(--accent-green); }
.score-good .overall-score-value { color: var(--accent-blue); }
.score-fair .overall-score-value { color: var(--accent-amber); }
.score-poor .overall-score-value { color: var(--accent-red); }
/* ── Score Cards Grid ──────────────────────────────────────── */
.scores-grid {
display: grid;
grid-template-columns: repeat(5, 1fr);
gap: 16px;
margin-top: 24px;
}
.score-card {
background: var(--bg-card);
border: 1px solid var(--border);
border-radius: var(--radius-lg);
padding: 20px;
transition: border-color 0.2s, transform 0.2s;
box-shadow: var(--shadow-card);
}
.score-card:hover {
border-color: var(--accent-blue);
transform: translateY(-2px);
}
.score-card-header {
display: flex;
align-items: center;
gap: 8px;
margin-bottom: 12px;
}
.score-card-header .material-symbols-outlined {
font-size: 20px;
color: var(--accent-blue);
}
.score-card-header h3 {
font-size: 13px;
font-weight: 600;
color: var(--text-secondary);
}
.score-value-wrap {
display: flex;
align-items: baseline;
gap: 2px;
margin-bottom: 10px;
}
.score-num {
font-size: 32px;
font-weight: 800;
line-height: 1;
}
.score-max {
font-size: 14px;
color: var(--text-muted);
font-weight: 500;
}
.score-reasoning {
font-size: 12px;
color: var(--text-secondary);
line-height: 1.5;
}
/* ── Mistakes ──────────────────────────────────────────────── */
.mistakes-card { margin-top: 24px; }
.mistakes-list {
display: flex;
flex-direction: column;
gap: 12px;
}
.mistake-item {
display: flex;
gap: 14px;
padding: 16px;
background: var(--bg-input);
border-radius: var(--radius-md);
border-left: 3px solid var(--accent-red);
}
.mistake-icon {
flex-shrink: 0;
width: 32px;
height: 32px;
border-radius: var(--radius-sm);
background: rgba(239, 68, 68, 0.1);
display: flex;
align-items: center;
justify-content: center;
}
.mistake-icon .material-symbols-outlined {
font-size: 18px;
color: var(--accent-red);
}
.mistake-content { flex: 1; }
.mistake-type {
font-size: 12px;
font-weight: 600;
text-transform: uppercase;
letter-spacing: 0.5px;
color: var(--accent-red);
margin-bottom: 4px;
}
.mistake-text {
font-size: 13px;
color: var(--text-primary);
font-family: var(--font-mono);
background: rgba(239, 68, 68, 0.08);
padding: 4px 8px;
border-radius: 4px;
display: inline-block;
margin-bottom: 6px;
}
.mistake-suggestion {
font-size: 13px;
color: var(--accent-green);
}
.mistake-suggestion::before {
content: "💡 ";
}
/* ── Rewrite Comparison ────────────────────────────────────── */
.rewrite-card { margin-top: 24px; }
.token-savings {
display: flex;
align-items: center;
gap: 6px;
font-size: 14px;
font-weight: 600;
color: var(--accent-green);
}
.token-savings .material-symbols-outlined { font-size: 18px; }
.comparison-grid {
display: grid;
grid-template-columns: 1fr 1fr;
gap: 16px;
margin-bottom: 20px;
}
.comparison-label {
display: flex;
align-items: center;
gap: 8px;
font-size: 13px;
font-weight: 600;
color: var(--text-secondary);
margin-bottom: 10px;
}
.dot {
width: 8px;
height: 8px;
border-radius: 50%;
}
.dot-red { background: var(--accent-red); }
.dot-green { background: var(--accent-green); }
.token-badge {
font-size: 11px;
font-weight: 500;
background: var(--bg-input);
padding: 2px 8px;
border-radius: 999px;
color: var(--text-muted);
margin-left: auto;
}
.comparison-text {
background: var(--bg-input);
border: 1px solid var(--border);
border-radius: var(--radius-md);
padding: 16px;
font-family: var(--font-mono);
font-size: 13px;
line-height: 1.7;
white-space: pre-wrap;
word-break: break-word;
max-height: 300px;
overflow-y: auto;
color: var(--text-primary);
}
.rewrite-actions {
display: flex;
gap: 12px;
}
/* ── Error ─────────────────────────────────────────────────── */
.error-section { margin-top: 32px; }
.error-card {
text-align: center;
padding: 40px;
}
.error-card .material-symbols-outlined {
font-size: 48px;
color: var(--accent-red);
margin-bottom: 12px;
}
.error-card h3 {
font-size: 18px;
margin-bottom: 8px;
}
.error-card p {
color: var(--text-secondary);
font-size: 14px;
margin-bottom: 20px;
}
/* ── Toast ─────────────────────────────────────────────────── */
.toast {
position: fixed;
bottom: 32px;
right: 32px;
display: flex;
align-items: center;
gap: 10px;
padding: 14px 20px;
background: var(--bg-card);
border: 1px solid var(--accent-green);
border-radius: var(--radius-md);
box-shadow: 0 8px 32px rgba(0, 0, 0, 0.4);
font-size: 14px;
font-weight: 500;
z-index: 100;
animation: toastIn 0.3s ease-out;
}
.toast .material-symbols-outlined {
color: var(--accent-green);
}
@keyframes toastIn {
from { transform: translateY(20px); opacity: 0; }
to { transform: translateY(0); opacity: 1; }
}
/* ── Utilities ─────────────────────────────────────────────── */
.hidden { display: none !important; }
.results-section { animation: fadeIn 0.4s ease-out; }
@keyframes fadeIn {
from { opacity: 0; transform: translateY(12px); }
to { opacity: 1; transform: translateY(0); }
}
/* ── Responsive ────────────────────────────────────────────── */
@media (max-width: 900px) {
.scores-grid {
grid-template-columns: repeat(3, 1fr);
}
}
@media (max-width: 680px) {
.header { padding: 12px 16px; }
.main-content { padding: 16px 12px 48px; }
.scores-grid { grid-template-columns: repeat(2, 1fr); }
.comparison-grid { grid-template-columns: 1fr; }
.field-row { flex-direction: column; }
.overall-score-card { flex-direction: column; text-align: center; }
.rewrite-actions { flex-direction: column; }
}
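The score ring above is sized by `stroke-dasharray: 327` (the ring's circumference) and animated via `stroke-dashoffset`. A minimal sketch of the offset math the frontend presumably applies in `app.js` (not shown in this chunk; the function name and rounding are assumptions):

```python
# Hypothetical sketch of the ring-fill math: the SVG circle is fully
# hidden at offset 327 (the circumference) and fully drawn at offset 0.
CIRCUMFERENCE = 327  # matches stroke-dasharray in styles.css

def ring_offset(score: int) -> float:
    """Map a 0-100 score to a stroke-dashoffset value."""
    score = max(0, min(100, score))  # clamp out-of-range scores
    return round(CIRCUMFERENCE * (1 - score / 100), 2)
```

For example, a score of 75 leaves a quarter of the ring unfilled.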


@@ -0,0 +1,14 @@
{
"mcpServers": {
"prompt-analyzer": {
"command": "/Users/ananya/Downloads/stitch_prompt_performance_analytics_dashboard/.venv/bin/python",
"args": [
"-m",
"mcp_server.server"
],
"env": {
"PYTHONPATH": "/Users/ananya/Downloads/stitch_prompt_performance_analytics_dashboard"
}
}
}
}
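The config above hard-codes absolute paths from one machine. A hedged sketch of a helper (hypothetical, not part of this commit) that emits the same structure for any checkout location:

```python
# Hypothetical helper (not in this repo): build the Claude Desktop MCP
# entry for an arbitrary checkout instead of a hard-coded absolute path.
from pathlib import Path

def mcp_entry(repo_root: str) -> dict:
    """Return a claude_desktop_config.json fragment for this repo."""
    root = Path(repo_root).resolve()
    return {
        "mcpServers": {
            "prompt-analyzer": {
                "command": str(root / ".venv" / "bin" / "python"),
                "args": ["-m", "mcp_server.server"],
                "env": {"PYTHONPATH": str(root)},
            }
        }
    }
```

Writing `json.dumps(mcp_entry(...), indent=2)` into the Claude Desktop config file would reproduce the entry above for the given checkout.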

mcp_server/server.py Normal file

@@ -0,0 +1,165 @@
"""
MCP Server: exposes the Prompt Analyzer as discoverable tools
for enterprise multi-agent systems.
Tools:
- analyze_prompt: Analyze a prompt for quality
- get_analysis_history: Retrieve past analyses
"""
import asyncio
import json
import logging
from mcp.server import Server
from mcp.server.stdio import stdio_server
from mcp.types import Tool, TextContent
from prompt_analyzer import PromptAnalyzer
from analytics_reporter.reporter import AnalyticsReporter
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)
# Create MCP server
server = Server("prompt-analyzer")
# Shared instances
analyzer = PromptAnalyzer()
reporter = AnalyticsReporter()
@server.list_tools()
async def list_tools() -> list[Tool]:
"""Advertise available tools to MCP clients."""
return [
Tool(
name="analyze_prompt",
description=(
"Analyze a prompt for quality across 5 dimensions: "
"clarity, token efficiency, goal alignment, structure, "
"and vagueness. Returns scores (0-100), identified mistakes "
"with suggestions, an optimized rewrite, and token savings. "
"Supports project-aware analysis for context-specific recommendations."
),
inputSchema={
"type": "object",
"properties": {
"prompt": {
"type": "string",
"description": "The prompt to analyze",
},
"context": {
"type": "string",
"description": "Optional goal or context for the prompt",
},
"project_id": {
"type": "string",
"description": "Project ID for context-aware analysis (isolated per project)",
},
"source_agent": {
"type": "string",
"description": "Name of the agent that authored this prompt",
},
"target_agent": {
"type": "string",
"description": "Name of the agent this prompt is directed to",
},
},
"required": ["prompt"],
},
),
Tool(
name="get_analysis_history",
description=(
"Retrieve past prompt analyses. Can filter by project. "
"Useful for understanding prompt quality trends."
),
inputSchema={
"type": "object",
"properties": {
"limit": {
"type": "integer",
"description": "Max results to return (default 10)",
"default": 10,
},
"project_id": {
"type": "string",
"description": "Filter by project ID",
},
},
},
),
]
@server.call_tool()
async def call_tool(name: str, arguments: dict) -> list[TextContent]:
"""Handle tool calls from MCP clients."""
if name == "analyze_prompt":
prompt = arguments.get("prompt", "")
if not prompt:
return [TextContent(type="text", text="Error: prompt is required")]
try:
# All MCP calls are agent interactions — default source_agent if not provided
source_agent = arguments.get("source_agent") or "mcp-client"
result = await analyzer.analyze(
prompt=prompt,
context=arguments.get("context"),
project_id=arguments.get("project_id"),
source_agent=source_agent,
target_agent=arguments.get("target_agent"),
)
# Store via Agent 2
await reporter.initialize()
analysis_id = await reporter.report(result)
response = result.model_dump(mode="json")
response["analysis_id"] = analysis_id
return [
TextContent(
type="text",
text=json.dumps(response, indent=2, default=str),
)
]
except Exception as e:
logger.error("analyze_prompt failed: %s", e, exc_info=True)
return [TextContent(type="text", text=f"Error: {str(e)}")]
elif name == "get_analysis_history":
try:
await reporter.initialize()
data = await reporter.get_interactions(
limit=arguments.get("limit", 10),
project_id=arguments.get("project_id"),
)
return [
TextContent(
type="text",
text=json.dumps(data, indent=2, default=str),
)
]
except Exception as e:
logger.error("get_analysis_history failed: %s", e, exc_info=True)
return [TextContent(type="text", text=f"Error: {str(e)}")]
return [TextContent(type="text", text=f"Unknown tool: {name}")]
async def main():
"""Run the MCP server over stdio."""
await reporter.initialize()
logger.info("MCP Server starting (stdio mode)")
async with stdio_server() as (read_stream, write_stream):
await server.run(read_stream, write_stream, server.create_initialization_options())
if __name__ == "__main__":
asyncio.run(main())


@@ -0,0 +1,32 @@
"""
Prompt Analyzer: AI-powered prompt quality analysis.
Usage:
from prompt_analyzer import PromptAnalyzer
analyzer = PromptAnalyzer()
result = await analyzer.analyze("Your prompt here")
"""
from prompt_analyzer.analyzer import PromptAnalyzer
from prompt_analyzer.models import (
AnalysisResult,
AnalyzeRequest,
Scores,
Score,
Mistake,
TokenComparison,
AnalysisMetadata,
)
__version__ = "0.1.0"
__all__ = [
"PromptAnalyzer",
"AnalysisResult",
"AnalyzeRequest",
"Scores",
"Score",
"Mistake",
"TokenComparison",
"AnalysisMetadata",
]

prompt_analyzer/analyzer.py Normal file

@@ -0,0 +1,321 @@
"""
Core Prompt Analyzer agent.
Takes a prompt, calls Claude via the Anthropic API, and returns
structured analysis: scores, mistakes, rewritten prompt, and token comparison.
"""
import json
import logging
from typing import Optional
import tiktoken
from prompt_analyzer.anthropic_client import AnthropicClient
from prompt_analyzer.context_store import ContextStore
from prompt_analyzer.models import (
AnalysisResult,
AnalysisMetadata,
Scores,
Score,
Mistake,
TokenComparison,
)
logger = logging.getLogger(__name__)
SYSTEM_PROMPT = """You are an expert Prompt Quality Analyzer. Your job is to analyze a given prompt and return a structured JSON assessment.
ANALYZE THE PROMPT ON THESE 5 DIMENSIONS (score each 0-100):
1. **Clarity** (0-100): How unambiguous is the prompt? Can it be misinterpreted? Are instructions precise?
2. **Token Efficiency** (0-100): How concise is the prompt? Are there redundant words, repeated instructions, or unnecessary filler? Higher = more efficient.
3. **Goal Alignment** (0-100): Does the prompt clearly state what output is expected? Is the desired result format, length, and style specified?
4. **Structure** (0-100): Is the prompt well-organized? Does it have logical flow, proper sections, and clear instruction ordering?
5. **Vagueness Index** (0-100): How many vague/ambiguous phrases exist? ("make it good", "do something nice", "be creative"). Score 0 = extremely vague, 100 = no vagueness at all.
ALSO:
- **Identify specific mistakes** in the prompt. For each mistake, provide:
- `type`: one of: vague_instruction, missing_context, redundancy, contradiction, poor_formatting, missing_output_format, unclear_scope, overly_complex
- `text`: the exact problematic text from the prompt (null if the mistake is about something missing)
- `suggestion`: a concrete fix
- **Rewrite the prompt** to be optimal: maximum clarity, minimum tokens, best structure. The rewrite should accomplish the exact same goal as the original.
{project_context}
RESPOND WITH ONLY VALID JSON in this exact format (no markdown, no code fences, just the JSON):
{{
"overall_score": <number 0-100, weighted average: clarity 25%, token_efficiency 20%, goal_alignment 25%, structure 15%, vagueness_index 15%>,
"scores": {{
"clarity": {{ "score": <0-100>, "reasoning": "<1-2 sentences>" }},
"token_efficiency": {{ "score": <0-100>, "reasoning": "<1-2 sentences>" }},
"goal_alignment": {{ "score": <0-100>, "reasoning": "<1-2 sentences>" }},
"structure": {{ "score": <0-100>, "reasoning": "<1-2 sentences>" }},
"vagueness_index": {{ "score": <0-100>, "reasoning": "<1-2 sentences>" }}
}},
"mistakes": [
{{ "type": "<type>", "text": "<problematic text or null>", "suggestion": "<fix>" }}
],
"rewritten_prompt": "<the optimized version of the prompt>"
}}"""
class PromptAnalyzer:
"""
AI-powered prompt quality analyzer.
Usage:
analyzer = PromptAnalyzer()
result = await analyzer.analyze("Your prompt here")
For context-aware analysis (enterprise):
result = await analyzer.analyze(
prompt="...",
project_id="customer_support",
source_agent="planner",
)
"""
def __init__(self):
self.llm = AnthropicClient()
self.context_store = ContextStore()
# Use the cl100k_base tokenizer as a rough approximation of Claude's tokenization
try:
self.tokenizer = tiktoken.get_encoding("cl100k_base")
except Exception:
self.tokenizer = None
logger.warning("tiktoken encoding unavailable, token counts will be estimated")
def _count_tokens(self, text: str) -> int:
"""Count tokens in text."""
if self.tokenizer:
return len(self.tokenizer.encode(text))
# Rough estimate: ~4 chars per token
return len(text) // 4
async def analyze(
self,
prompt: str,
context: Optional[str] = None,
project_id: Optional[str] = None,
source_agent: Optional[str] = None,
target_agent: Optional[str] = None,
) -> AnalysisResult:
"""
Analyze a prompt and return structured quality assessment.
Args:
prompt: The prompt to analyze
context: Optional goal/context for the prompt
project_id: Optional project ID for context-aware analysis
source_agent: Optional agent that authored this prompt
target_agent: Optional agent this prompt is directed to
Returns:
AnalysisResult with scores, mistakes, and rewritten prompt
"""
logger.info(
"Analyzing prompt (length=%d, project=%s, agent=%s)",
len(prompt),
project_id,
source_agent,
)
# Build context-aware system prompt
project_context = self.context_store.build_context_summary(
project_id, source_agent
)
system_prompt = SYSTEM_PROMPT.format(project_context=project_context)
# Build user message
user_message = self._build_user_message(prompt, context)
# Call Claude
raw_response = await self.llm.invoke(system_prompt, user_message)
# Parse the JSON response
result = self._parse_response(raw_response, prompt, project_id, source_agent, target_agent)
# Update context store (if project-aware)
if project_id:
analysis_dict = result.model_dump()
self.context_store.append_history(project_id, analysis_dict)
self.context_store.update_patterns(project_id, analysis_dict)
if source_agent:
self.context_store.update_agent_context(
project_id, source_agent, analysis_dict
)
return result
def _build_user_message(
self, prompt: str, context: Optional[str] = None
) -> str:
"""Build the user message sent to Claude."""
parts = ["PROMPT TO ANALYZE:\n---"]
parts.append(prompt)
parts.append("---")
if context:
parts.append(f"\nCONTEXT/GOAL: {context}")
return "\n".join(parts)
def _extract_json(self, raw: str) -> str:
"""Extract JSON from Claude's response, handling various formatting."""
import re
cleaned = raw.strip()
# 1. Remove markdown code fences (```json ... ``` or ``` ... ```)
fence_pattern = re.compile(r'```(?:json)?\s*\n?(.*?)\n?\s*```', re.DOTALL)
match = fence_pattern.search(cleaned)
if match:
cleaned = match.group(1).strip()
# 2. If the response doesn't start with {, try to find the JSON object
if not cleaned.startswith("{"):
brace_start = cleaned.find("{")
if brace_start != -1:
cleaned = cleaned[brace_start:]
# 3. Find the matching closing brace
if cleaned.startswith("{"):
depth = 0
in_string = False
escape = False
end_pos = len(cleaned)
for i, ch in enumerate(cleaned):
if escape:
escape = False
continue
if ch == '\\' and in_string:
escape = True
continue
if ch == '"' and not escape:
in_string = not in_string
continue
if in_string:
continue
if ch == '{':
depth += 1
elif ch == '}':
depth -= 1
if depth == 0:
end_pos = i + 1
break
cleaned = cleaned[:end_pos]
# 4. Fix trailing commas before } or ] (common LLM mistake)
cleaned = re.sub(r',\s*([}\]])', r'\1', cleaned)
return cleaned
def _parse_response(
self,
raw: str,
original_prompt: str,
project_id: Optional[str],
source_agent: Optional[str],
target_agent: Optional[str],
) -> AnalysisResult:
"""Parse Claude's JSON response into an AnalysisResult."""
cleaned = self._extract_json(raw)
try:
data = json.loads(cleaned)
except json.JSONDecodeError as e:
logger.error("Failed to parse Claude response as JSON: %s", e)
logger.error("Raw response (first 500 chars): %s", raw[:500])
logger.error("Cleaned response (first 500 chars): %s", cleaned[:500])
# Return a fallback result
return self._fallback_result(original_prompt, str(e), project_id, source_agent, target_agent)
# Build structured result
try:
scores_data = data.get("scores", {})
scores = Scores(
clarity=Score(**scores_data.get("clarity", {"score": 0, "reasoning": "N/A"})),
token_efficiency=Score(**scores_data.get("token_efficiency", {"score": 0, "reasoning": "N/A"})),
goal_alignment=Score(**scores_data.get("goal_alignment", {"score": 0, "reasoning": "N/A"})),
structure=Score(**scores_data.get("structure", {"score": 0, "reasoning": "N/A"})),
vagueness_index=Score(**scores_data.get("vagueness_index", {"score": 0, "reasoning": "N/A"})),
)
mistakes = [
Mistake(**m) for m in data.get("mistakes", [])
]
rewritten = data.get("rewritten_prompt", original_prompt)
original_tokens = self._count_tokens(original_prompt)
rewritten_tokens = self._count_tokens(rewritten)
savings = (
round((1 - rewritten_tokens / original_tokens) * 100, 1)
if original_tokens > 0
else 0.0
)
return AnalysisResult(
original_prompt=original_prompt,
overall_score=data.get("overall_score", 0),
scores=scores,
mistakes=mistakes,
rewritten_prompt=rewritten,
token_comparison=TokenComparison(
original_tokens=original_tokens,
rewritten_tokens=rewritten_tokens,
savings_percent=savings,
),
metadata=AnalysisMetadata(
project_id=project_id,
source_agent=source_agent,
target_agent=target_agent,
mode="agent" if source_agent else "human",
),
)
except Exception as e:
logger.error("Failed to build AnalysisResult: %s", e)
return self._fallback_result(original_prompt, str(e), project_id, source_agent, target_agent)
def _fallback_result(
self,
prompt: str,
error: str,
project_id: Optional[str],
source_agent: Optional[str],
target_agent: Optional[str],
) -> AnalysisResult:
"""Return a fallback result when parsing fails."""
fallback_score = Score(score=0, reasoning=f"Analysis failed: {error}")
return AnalysisResult(
original_prompt=prompt,
overall_score=0,
scores=Scores(
clarity=fallback_score,
token_efficiency=fallback_score,
goal_alignment=fallback_score,
structure=fallback_score,
vagueness_index=fallback_score,
),
mistakes=[
Mistake(
type="analysis_error",
text=None,
suggestion=f"Re-run analysis. Error: {error}",
)
],
rewritten_prompt=prompt,
token_comparison=TokenComparison(
original_tokens=self._count_tokens(prompt),
rewritten_tokens=self._count_tokens(prompt),
savings_percent=0.0,
),
metadata=AnalysisMetadata(
project_id=project_id,
source_agent=source_agent,
target_agent=target_agent,
mode="agent" if source_agent else "human",
),
)
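The system prompt asks Claude to compute `overall_score` as a fixed weighted average, and `_parse_response` derives token savings from the two token counts. A minimal standalone sketch of both formulas, mirroring the weights stated in `SYSTEM_PROMPT` and the arithmetic in `_parse_response` (helper names are illustrative only):

```python
# Sketch of the scoring arithmetic the analyzer relies on:
# overall = weighted average of the five dimension scores,
# savings = percentage of tokens removed by the rewrite.
WEIGHTS = {
    "clarity": 0.25,
    "token_efficiency": 0.20,
    "goal_alignment": 0.25,
    "structure": 0.15,
    "vagueness_index": 0.15,
}

def expected_overall(scores: dict) -> float:
    """Weighted average per the SYSTEM_PROMPT weighting."""
    return round(sum(scores[dim] * w for dim, w in WEIGHTS.items()), 1)

def savings_percent(original_tokens: int, rewritten_tokens: int) -> float:
    """Token savings as computed in _parse_response."""
    if original_tokens <= 0:
        return 0.0
    return round((1 - rewritten_tokens / original_tokens) * 100, 1)
```

A prompt scoring 100 on clarity alone contributes 25 points to the overall score; a rewrite from 200 to 150 tokens reports 25% savings.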


@@ -0,0 +1,64 @@
"""Anthropic Claude client for prompt analysis."""
import logging
import anthropic
from prompt_analyzer.config import (
ANTHROPIC_API_KEY,
ANTHROPIC_MODEL,
LLM_MAX_TOKENS,
LLM_TEMPERATURE,
)
logger = logging.getLogger(__name__)
class AnthropicClient:
"""Wrapper around the Anthropic Messages API."""
def __init__(self):
if not ANTHROPIC_API_KEY:
raise ValueError("ANTHROPIC_API_KEY is not set in .env")
# Async client so invoke() does not block the event loop
self.client = anthropic.AsyncAnthropic(api_key=ANTHROPIC_API_KEY)
self.model = ANTHROPIC_MODEL
self.max_tokens = LLM_MAX_TOKENS
self.temperature = LLM_TEMPERATURE
async def invoke(self, system_prompt: str, user_message: str) -> str:
"""
Send a message to Claude and return the text response.
Uses the Anthropic Messages API directly.
"""
logger.info("Invoking Anthropic model=%s", self.model)
try:
message = await self.client.messages.create(
model=self.model,
max_tokens=self.max_tokens,
temperature=self.temperature,
system=system_prompt,
messages=[
{"role": "user", "content": user_message},
],
)
# Extract text from the response
result = ""
for block in message.content:
if block.type == "text":
result += block.text
logger.info("Anthropic response received, length=%d chars", len(result))
return result
except anthropic.AuthenticationError:
logger.error("Anthropic authentication failed — check your API key")
raise
except anthropic.RateLimitError:
logger.error("Anthropic rate limit hit")
raise
except Exception as e:
logger.error("Anthropic invocation failed: %s", str(e))
raise

prompt_analyzer/config.py Normal file

@@ -0,0 +1,29 @@
"""Configuration for the Prompt Analyzer."""
import os
from pathlib import Path
from dotenv import load_dotenv
# Load .env from the project root (two levels up from this file)
_project_root = Path(__file__).resolve().parent.parent
load_dotenv(_project_root / ".env")
# Anthropic
ANTHROPIC_API_KEY = os.getenv("ANTHROPIC_API_KEY")
ANTHROPIC_MODEL = os.getenv("ANTHROPIC_MODEL", "claude-sonnet-4-20250514")
# LLM parameters
LLM_MAX_TOKENS = int(os.getenv("LLM_MAX_TOKENS", "4096"))
LLM_TEMPERATURE = float(os.getenv("LLM_TEMPERATURE", "0.3"))
# Context store
CONTEXT_STORE_DIR = os.getenv(
"CONTEXT_STORE_DIR",
os.path.join(os.path.dirname(os.path.dirname(__file__)), "context_store"),
)
# Analytics DB
ANALYTICS_DB_PATH = os.getenv(
"ANALYTICS_DB_PATH",
os.path.join(os.path.dirname(os.path.dirname(__file__)), "analytics.db"),
)
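The settings above follow one pattern: read a string from the environment, apply a typed default, and parse. A minimal sketch of that parsing (the helper is illustrative, not part of config.py):

```python
# Sketch of the override behavior in config.py: string env vars are
# parsed into typed settings, falling back to defaults when unset.
def load_llm_settings(env: dict) -> tuple[int, float]:
    """Mirror of the LLM_MAX_TOKENS / LLM_TEMPERATURE parsing."""
    max_tokens = int(env.get("LLM_MAX_TOKENS", "4096"))
    temperature = float(env.get("LLM_TEMPERATURE", "0.3"))
    return max_tokens, temperature
```

Note that a malformed value (e.g. `LLM_MAX_TOKENS=lots`) raises ValueError at parse time, which matches the import-time failure mode of the module above.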


@@ -0,0 +1,272 @@
"""
Per-project context store with strict isolation.
Each project gets its own directory. Context from one project
is never read or written by another project's operations.
"""
import json
import os
import logging
from datetime import datetime, timezone
from typing import Optional
from prompt_analyzer.config import CONTEXT_STORE_DIR
logger = logging.getLogger(__name__)
class ContextStore:
"""Manages per-project context with strict isolation."""
def __init__(self, base_dir: str = CONTEXT_STORE_DIR):
self.base_dir = base_dir
# ── Path helpers (always scoped to a project) ──────────────
def _project_dir(self, project_id: str) -> str:
"""Get the isolated directory for a specific project."""
safe_name = project_id.replace("/", "_").replace("..", "_")
return os.path.join(self.base_dir, f"project_{safe_name}")
def _profile_path(self, project_id: str) -> str:
return os.path.join(self._project_dir(project_id), "profile.json")
def _history_path(self, project_id: str) -> str:
return os.path.join(self._project_dir(project_id), "history.jsonl")
def _patterns_path(self, project_id: str) -> str:
return os.path.join(self._project_dir(project_id), "patterns.json")
def _agent_path(self, project_id: str, agent_id: str) -> str:
safe_agent = agent_id.replace("/", "_").replace("..", "_")
return os.path.join(
self._project_dir(project_id), "agents", f"{safe_agent}.json"
)
def _ensure_dir(self, path: str) -> None:
os.makedirs(os.path.dirname(path), exist_ok=True)
# ── Project profile ────────────────────────────────────────
def get_project_profile(self, project_id: str) -> dict:
"""Load project profile. Returns empty dict if project is new."""
path = self._profile_path(project_id)
if not os.path.exists(path):
return {}
with open(path, "r") as f:
return json.load(f)
def save_project_profile(self, project_id: str, profile: dict) -> None:
"""Save or update a project profile."""
path = self._profile_path(project_id)
self._ensure_dir(path)
with open(path, "w") as f:
json.dump(profile, f, indent=2, default=str)
logger.info("Saved profile for project=%s", project_id)
# ── Analysis history (append-only log) ─────────────────────
def append_history(self, project_id: str, analysis: dict) -> None:
"""Append an analysis result to the project's history."""
path = self._history_path(project_id)
self._ensure_dir(path)
entry = {
**analysis,
"_stored_at": datetime.now(timezone.utc).isoformat(),
}
with open(path, "a") as f:
f.write(json.dumps(entry, default=str) + "\n")
logger.debug("Appended history for project=%s", project_id)
def get_recent_history(
self, project_id: str, limit: int = 20
) -> list[dict]:
"""Get the most recent analyses for a project."""
path = self._history_path(project_id)
if not os.path.exists(path):
return []
entries = []
with open(path, "r") as f:
for line in f:
line = line.strip()
if line:
try:
entries.append(json.loads(line))
except json.JSONDecodeError:
continue
return entries[-limit:]
# ── Learned patterns ───────────────────────────────────────
def get_patterns(self, project_id: str) -> dict:
"""
Get learned patterns for a project:
- common_mistakes: list of frequently seen mistake types
- best_templates: high-scoring prompt excerpts
- preferred_style: inferred preferences
"""
path = self._patterns_path(project_id)
if not os.path.exists(path):
return {"common_mistakes": [], "best_templates": [], "preferred_style": ""}
with open(path, "r") as f:
return json.load(f)
def update_patterns(self, project_id: str, analysis: dict) -> None:
"""Update learned patterns based on a new analysis result."""
patterns = self.get_patterns(project_id)
# Track mistake frequencies
mistake_types = [m.get("type", "unknown") for m in analysis.get("mistakes", [])]
existing_mistakes = {m["type"]: m.get("count", 0) for m in patterns.get("common_mistakes", []) if isinstance(m, dict)}
for mt in mistake_types:
existing_mistakes[mt] = existing_mistakes.get(mt, 0) + 1
patterns["common_mistakes"] = [
{"type": k, "count": v}
for k, v in sorted(existing_mistakes.items(), key=lambda x: -x[1])
][:10] # keep top 10
# Track best-scoring prompts as templates
overall_score = analysis.get("overall_score", 0)
if overall_score >= 85:
rewritten = analysis.get("rewritten_prompt", "")
if rewritten:
templates = patterns.get("best_templates", [])
templates.append(
{"prompt": rewritten[:500], "score": overall_score}
)
# Keep top 5 by score
templates.sort(key=lambda x: -x["score"])
patterns["best_templates"] = templates[:5]
path = self._patterns_path(project_id)
self._ensure_dir(path)
with open(path, "w") as f:
json.dump(patterns, f, indent=2, default=str)
# ── Per-agent context (within a project) ───────────────────
def get_agent_context(
self, project_id: str, agent_id: str
) -> dict:
"""Get an agent's context within a specific project."""
path = self._agent_path(project_id, agent_id)
if not os.path.exists(path):
return {
"agent_id": agent_id,
"total_analyses": 0,
"avg_score": 0,
"common_mistakes": [],
"weakest_dimension": None,
}
with open(path, "r") as f:
return json.load(f)
def update_agent_context(
self, project_id: str, agent_id: str, analysis: dict
) -> None:
"""Update an agent's context within a project after analysis."""
ctx = self.get_agent_context(project_id, agent_id)
# Update running average score
n = ctx["total_analyses"]
old_avg = ctx["avg_score"]
new_score = analysis.get("overall_score", 0)
ctx["total_analyses"] = n + 1
ctx["avg_score"] = round((old_avg * n + new_score) / (n + 1), 1)
# Track agent-specific mistakes
mistake_types = [m.get("type", "unknown") for m in analysis.get("mistakes", [])]
existing = {m["type"]: m.get("count", 0) for m in ctx.get("common_mistakes", []) if isinstance(m, dict)}
for mt in mistake_types:
existing[mt] = existing.get(mt, 0) + 1
ctx["common_mistakes"] = [
{"type": k, "count": v}
for k, v in sorted(existing.items(), key=lambda x: -x[1])
][:5]
# Find weakest dimension
scores = analysis.get("scores", {})
        if scores and isinstance(scores, dict):
            weakest = min(
                scores.items(),
                key=lambda x: x[1].get("score", 100) if isinstance(x[1], dict) else x[1],
            )
            ctx["weakest_dimension"] = weakest[0]
path = self._agent_path(project_id, agent_id)
self._ensure_dir(path)
with open(path, "w") as f:
json.dump(ctx, f, indent=2, default=str)
logger.debug(
"Updated agent context: project=%s agent=%s", project_id, agent_id
)
# ── Build context summary for Claude's system prompt ───────
def build_context_summary(
self,
project_id: Optional[str],
source_agent: Optional[str] = None,
) -> str:
"""
Build a context summary string to inject into Claude's system prompt.
Only loads data from the specified project (strict isolation).
Returns empty string if no project_id is provided.
"""
if not project_id:
return ""
parts = []
# Project profile
profile = self.get_project_profile(project_id)
if profile:
parts.append(f"PROJECT: {profile.get('name', project_id)}")
if profile.get("domain"):
parts.append(f"Domain: {profile['domain']}")
if profile.get("description"):
parts.append(f"Description: {profile['description']}")
# Learned patterns
patterns = self.get_patterns(project_id)
if patterns.get("common_mistakes"):
mistakes_str = ", ".join(
f"{m['type']} ({m['count']}x)" for m in patterns["common_mistakes"][:5]
)
parts.append(f"RECURRING MISTAKES IN THIS PROJECT: {mistakes_str}")
if patterns.get("best_templates"):
best = patterns["best_templates"][0]
parts.append(
f"HIGHEST-SCORING PROMPT IN THIS PROJECT (score {best['score']}):\n\"{best['prompt']}\""
)
# Agent-specific context
if source_agent:
agent_ctx = self.get_agent_context(project_id, source_agent)
if agent_ctx.get("total_analyses", 0) > 0:
parts.append(
f"AGENT '{source_agent}' IN THIS PROJECT: "
f"avg score={agent_ctx['avg_score']}, "
f"analyses={agent_ctx['total_analyses']}"
)
if agent_ctx.get("weakest_dimension"):
parts.append(
f"This agent's weakest dimension: {agent_ctx['weakest_dimension']}"
)
if agent_ctx.get("common_mistakes"):
am = ", ".join(
m["type"] for m in agent_ctx["common_mistakes"][:3]
)
parts.append(f"This agent's common mistakes: {am}")
if not parts:
return ""
return (
"\n\n--- PROJECT CONTEXT (use this to make recommendations more specific) ---\n"
+ "\n".join(parts)
+ "\n--- END PROJECT CONTEXT ---"
)
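`update_agent_context` maintains a running average without storing every score: the new mean is `(old_avg * n + new_score) / (n + 1)`. A self-contained sketch of that update rule, mirroring the code above outside the `ContextStore` class:

```python
def update_running_avg(old_avg: float, n: int, new_score: float) -> float:
    """Incremental mean update, as used in update_agent_context."""
    return round((old_avg * n + new_score) / (n + 1), 1)

# Feed in three scores one at a time, as successive analyses would
avg = 0.0
for i, score in enumerate([80, 90, 70]):
    avg = update_running_avg(avg, i, score)
```

After the three updates `avg` equals the plain mean of the scores, 80.0. Note that because each step rounds to one decimal, long sequences can drift very slightly from the exact mean; for this dashboard that is an acceptable trade-off against storing full history.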

64
prompt_analyzer/models.py Normal file

@@ -0,0 +1,64 @@
"""Pydantic models for prompt analysis data structures."""
from __future__ import annotations
from datetime import datetime, timezone
from typing import Optional
from pydantic import BaseModel, Field
class Score(BaseModel):
"""A single dimension score with reasoning."""
score: int = Field(ge=0, le=100, description="Score from 0-100")
reasoning: str = Field(description="Brief explanation for the score")
class Mistake(BaseModel):
"""A specific mistake identified in the prompt."""
type: str = Field(description="Category: vague_instruction, missing_context, redundancy, contradiction, poor_formatting, missing_output_format")
text: Optional[str] = Field(default=None, description="The problematic text from the prompt, if applicable")
suggestion: str = Field(description="How to fix this mistake")
class TokenComparison(BaseModel):
"""Token usage comparison between original and rewritten prompts."""
original_tokens: int = Field(ge=0)
rewritten_tokens: int = Field(ge=0)
savings_percent: float = Field(description="Percentage of tokens saved")
class Scores(BaseModel):
"""All 5 analysis dimension scores."""
clarity: Score
token_efficiency: Score
goal_alignment: Score
structure: Score
vagueness_index: Score
class AnalysisMetadata(BaseModel):
"""Metadata about who/what triggered the analysis."""
project_id: Optional[str] = None
source_agent: Optional[str] = None
target_agent: Optional[str] = None
timestamp: datetime = Field(default_factory=lambda: datetime.now(timezone.utc))
mode: str = Field(default="human", description="'human' or 'agent'")
class AnalysisResult(BaseModel):
"""Complete analysis result returned by the Prompt Analyzer."""
original_prompt: str
overall_score: int = Field(ge=0, le=100)
scores: Scores
mistakes: list[Mistake] = Field(default_factory=list)
rewritten_prompt: str
token_comparison: TokenComparison
metadata: AnalysisMetadata = Field(default_factory=AnalysisMetadata)
class AnalyzeRequest(BaseModel):
"""Request payload for prompt analysis."""
prompt: str = Field(min_length=1, description="The prompt to analyze")
context: Optional[str] = Field(default=None, description="Optional goal or context for the prompt")
project_id: Optional[str] = Field(default=None, description="Project ID for context-aware analysis")
source_agent: Optional[str] = Field(default=None, description="Agent that sent this prompt")
target_agent: Optional[str] = Field(default=None, description="Agent this prompt is directed to")
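`TokenComparison` carries `savings_percent` alongside the raw counts. One plausible way to derive it — this helper is illustrative, not the repo's actual computation — guards against division by zero and allows a negative value when the rewrite grew:

```python
def savings_percent(original_tokens: int, rewritten_tokens: int) -> float:
    """Percentage of tokens saved by the rewrite (negative if it grew)."""
    if original_tokens == 0:
        return 0.0  # avoid division by zero for an empty prompt
    return round((original_tokens - rewritten_tokens) / original_tokens * 100, 1)

pct = savings_percent(200, 150)  # 50 of 200 tokens saved
```

Keeping the derived percentage in the model alongside the raw counts lets the dashboard render it directly, at the cost of the two fields possibly disagreeing if one is ever updated without the other.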

7
requirements.txt Normal file

@@ -0,0 +1,7 @@
fastapi
uvicorn[standard]
anthropic
pydantic>=2.0
python-dotenv
tiktoken
aiosqlite