working; needs UX improvements
This commit is contained in:
parent
a949eb8e57
commit
b37f444df6
127
ENV_EXAMPLES.md
@ -1,127 +0,0 @@
# Environment Variables Examples

## Backend Environment Variables

Create a `.env` file in `blog-editor/backend/` with the following:

```env
# =====================================================
# SERVER CONFIGURATION
# =====================================================
PORT=5001
NODE_ENV=development

# =====================================================
# DATABASE CONFIGURATION (PostgreSQL - Supabase)
# =====================================================
# Option 1: Use the Supabase connection string (recommended)
# Format: postgresql://user:password@host:port/database
DATABASE_URL=postgresql://postgres.ekqfmpvebntssdgwtioj:[YOUR-PASSWORD]@aws-1-ap-south-1.pooler.supabase.com:5432/postgres

# Option 2: Use individual parameters (for local development)
# Uncomment and use these if not using DATABASE_URL
# DB_HOST=localhost
# DB_PORT=5432
# DB_NAME=blog_editor
# DB_USER=postgres
# DB_PASSWORD=your_database_password_here

# =====================================================
# AUTH SERVICE INTEGRATION
# =====================================================
# URL of your existing auth service
# The blog editor validates JWT tokens via this service
AUTH_SERVICE_URL=http://localhost:3000

# =====================================================
# AWS S3 CONFIGURATION (for image uploads)
# =====================================================
AWS_REGION=us-east-1
AWS_ACCESS_KEY_ID=your_aws_access_key_here
AWS_SECRET_ACCESS_KEY=your_aws_secret_key_here
S3_BUCKET_NAME=blog-editor-images

# =====================================================
# CORS CONFIGURATION
# =====================================================
# Frontend URL that will make requests to this backend
CORS_ORIGIN=http://localhost:4000

# Production example:
# CORS_ORIGIN=https://your-frontend-domain.com
```

## Frontend Environment Variables

Create a `.env` file in `blog-editor/frontend/` with the following:

```env
# =====================================================
# BLOG EDITOR BACKEND API URL
# =====================================================
# URL of the blog editor backend API
# This is where posts, uploads, etc. are handled
VITE_API_URL=http://localhost:5001

# Production example:
# VITE_API_URL=https://api.yourdomain.com

# =====================================================
# AUTH SERVICE API URL
# =====================================================
# URL of your existing auth service
# This is where authentication (login, OTP, etc.) is handled
VITE_AUTH_API_URL=http://localhost:3000

# Production example:
# VITE_AUTH_API_URL=https://auth.yourdomain.com
```

## Quick Setup

### Backend
```bash
cd blog-editor/backend
cp env.example .env
# Edit .env with your actual values
```

### Frontend
```bash
cd blog-editor/frontend
cp env.example .env
# Edit .env with your actual values
```

## Required Values to Update

### Backend `.env`
- `DATABASE_URL` - **Supabase connection string** (replace `[YOUR-PASSWORD]` with your actual password)
  - Format: `postgresql://postgres.ekqfmpvebntssdgwtioj:[YOUR-PASSWORD]@aws-1-ap-south-1.pooler.supabase.com:5432/postgres`
  - Or use the individual `DB_*` parameters for local development
- `AUTH_SERVICE_URL` - URL where your auth service is running (default: http://localhost:3000)
  - **Note:** The auth service uses its own separate database
- `AWS_ACCESS_KEY_ID` - Your AWS access key
- `AWS_SECRET_ACCESS_KEY` - Your AWS secret key
- `S3_BUCKET_NAME` - Your S3 bucket name
- `CORS_ORIGIN` - Your frontend URL (default: http://localhost:4000)

### Frontend `.env`
- `VITE_API_URL` - Your blog editor backend URL (default: http://localhost:5001)
- `VITE_AUTH_API_URL` - Your auth service URL (default: http://localhost:3000)

## Notes

1. **VITE_ prefix**: Frontend environment variables must start with `VITE_` to be accessible in client code
2. **Database (Supabase)**:
   - Replace `[YOUR-PASSWORD]` in `DATABASE_URL` with your actual Supabase password
   - Supabase automatically handles SSL connections
   - The connection string uses Supabase's connection pooler
   - Make sure the database exists in Supabase (or use the default `postgres` database)
3. **Auth Service**:
   - Ensure your auth service is running on the port specified in `AUTH_SERVICE_URL`
   - **Important:** The auth service uses its own separate database (not Supabase)
4. **AWS S3**:
   - Create an S3 bucket
   - Configure CORS to allow PUT requests from your frontend
   - Create an IAM user with `s3:PutObject` and `s3:GetObject` permissions
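The required-value lists above can be enforced when the backend boots. A minimal sketch, assuming a hypothetical `checkEnv` helper (not part of the repo) that flags variables which are missing or still contain the `[YOUR-PASSWORD]` placeholder:

```javascript
// Hypothetical startup check: report env vars that are missing or still
// contain the [YOUR-PASSWORD] placeholder from the examples above.
function checkEnv(env, required) {
  return required.filter(
    (key) => !env[key] || String(env[key]).includes('YOUR-PASSWORD')
  )
}

// Backend usage (variable names come from this document):
const missing = checkEnv(process.env, ['DATABASE_URL', 'AUTH_SERVICE_URL', 'CORS_ORIGIN'])
if (missing.length > 0) {
  console.warn(`Missing or placeholder env vars: ${missing.join(', ')}`)
}
```

Running this at the top of the server entry point makes misconfiguration visible immediately instead of surfacing later as a database error.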
@ -1,91 +0,0 @@
# Auth Service Integration

The blog editor is integrated with the existing auth service located at `G:\LivingAi\GITTEA_RPO\auth`.

## How It Works

### Backend Integration

The blog editor backend validates JWT tokens by calling the auth service's `/auth/validate-token` endpoint:

1. The client sends a request with an `Authorization: Bearer <token>` header
2. The blog editor backend middleware (`middleware/auth.js`) extracts the token
3. The middleware calls `POST /auth/validate-token` on the auth service
4. The auth service validates the token and returns user info
5. The blog editor backend sets `req.user` and continues processing
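The validation flow above can be sketched as Express-style middleware. This is a hedged illustration, assuming Node 18+ (global `fetch`); the `data.user` response field is an assumption, so check the real `middleware/auth.js` for the exact shape:

```javascript
// Sketch of the token-validation middleware (steps 1-5 above).
// Hypothetical code for illustration; see middleware/auth.js for the real one.
function extractBearer(header) {
  // Step 2: pull the token out of "Authorization: Bearer <token>"
  if (!header || !header.startsWith('Bearer ')) return null
  return header.slice('Bearer '.length)
}

async function requireAuth(req, res, next) {
  const token = extractBearer(req.headers.authorization)
  if (!token) return res.status(401).json({ error: 'Missing token' })
  try {
    // Step 3: ask the auth service to validate the token
    const resp = await fetch(`${process.env.AUTH_SERVICE_URL}/auth/validate-token`, {
      method: 'POST',
      headers: { Authorization: `Bearer ${token}` },
    })
    if (!resp.ok) return res.status(401).json({ error: 'Invalid token' })
    // Steps 4-5: attach the returned user info and continue
    const data = await resp.json()
    req.user = data.user // assumed response shape
    next()
  } catch (err) {
    res.status(502).json({ error: 'Auth service unreachable' })
  }
}
```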
### Frontend Integration

The frontend uses the auth service directly for authentication:

1. **Login Flow:**
   - User enters a phone number
   - Frontend calls `POST /auth/request-otp` on the auth service
   - User enters the OTP
   - Frontend calls `POST /auth/verify-otp` on the auth service
   - Auth service returns an `access_token` and a `refresh_token`
   - Frontend stores the tokens in localStorage

2. **API Requests:**
   - Frontend includes an `Authorization: Bearer <access_token>` header
   - Blog editor backend validates the token via the auth service
   - If the token expires, the frontend automatically refreshes it using the `refresh_token`
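The refresh-on-expiry behaviour described above can be sketched as a small fetch wrapper. The `/auth/refresh` endpoint and its response field names are assumptions, so verify them against the auth service before relying on this:

```javascript
// Sketch of the automatic-refresh behaviour described above.
function withAuthHeader(options, token) {
  return {
    ...options,
    headers: { ...(options.headers || {}), Authorization: `Bearer ${token}` },
  }
}

// authUrl and storage are injected so the helper works outside the browser too.
async function apiFetch(url, { authUrl, storage, ...options }) {
  let resp = await fetch(url, withAuthHeader(options, storage.getItem('access_token')))
  if (resp.status === 401) {
    // Assumed refresh endpoint; the auth service may name it differently.
    const r = await fetch(`${authUrl}/auth/refresh`, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ refresh_token: storage.getItem('refresh_token') }),
    })
    if (r.ok) {
      const { access_token } = await r.json()
      storage.setItem('access_token', access_token)
      // Retry the original request with the fresh token.
      resp = await fetch(url, withAuthHeader(options, access_token))
    }
  }
  return resp
}
```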
## Configuration

### Backend (.env)
```env
AUTH_SERVICE_URL=http://localhost:3000
```

### Frontend (.env)
```env
VITE_AUTH_API_URL=http://localhost:3000
```

## Token Storage

- `access_token` - Stored in localStorage, used for API requests
- `refresh_token` - Stored in localStorage, used to refresh the access token
- `user` - User object stored in localStorage

## Authentication Flow

```
┌─────────┐         ┌──────────────┐         ┌─────────────┐
│ Client  │────────▶│ Auth Service │────────▶│ Blog Editor │
│         │         │              │         │   Backend   │
└─────────┘         └──────────────┘         └─────────────┘
     │                      │                        │
     │  1. Request OTP      │                        │
     │─────────────────────▶│                        │
     │                      │                        │
     │  2. Verify OTP       │                        │
     │─────────────────────▶│                        │
     │  3. Get Tokens       │                        │
     │◀─────────────────────│                        │
     │                      │                        │
     │  4. API Request      │                        │
     │──────────────────────────────────────────────▶│
     │                      │  5. Validate Token     │
     │                      │◀───────────────────────│
     │                      │  6. User Info          │
     │                      │───────────────────────▶│
     │  7. Response         │                        │
     │◀──────────────────────────────────────────────│
```

## Benefits

1. **Single Source of Truth:** All authentication is handled by one service
2. **Consistent Security:** The same JWT validation is used across all services
3. **Token Rotation:** The auth service handles token refresh and rotation
4. **User Management:** Users are managed centrally in the auth service
5. **Guest Support:** The auth service supports guest users

## Notes

- The blog editor backend does NOT handle user registration/login
- All authentication is delegated to the auth service
- The blog editor only validates tokens; it never creates them
- Phone/OTP authentication is used (not email/password)
@ -1,66 +0,0 @@
# Quick Start Guide

## Prerequisites Check

- [ ] Node.js 18+ installed
- [ ] PostgreSQL installed and running
- [ ] AWS account with an S3 bucket created
- [ ] AWS IAM user with S3 permissions

## 5-Minute Setup

### 1. Backend Setup (2 minutes)

```bash
cd backend
npm install
cp .env.example .env
# Edit .env with your database and AWS credentials
createdb blog_editor  # or use psql to create the database
npm run migrate
npm run dev
```

### 2. Frontend Setup (2 minutes)

```bash
cd frontend
npm install
cp .env.example .env
# Edit .env: VITE_API_URL=http://localhost:5001
npm run dev
```

### 3. Test the Application (1 minute)

1. Open http://localhost:4000
2. Log in with your phone number (OTP via the auth service)
3. Create a new post
4. Add some content with formatting
5. Upload an image
6. Publish the post

## Common Issues

### Database Connection Error
- Check PostgreSQL is running: `pg_isready`
- Verify the credentials in `.env`
- Ensure the database exists: `psql -l | grep blog_editor`

### S3 Upload Fails
- Verify the AWS credentials in `.env`
- Check the S3 bucket name is correct
- Ensure the bucket CORS is configured
- Verify the IAM user has the `s3:PutObject` permission

### CORS Error
- Check that `CORS_ORIGIN` in the backend `.env` matches the frontend URL
- Default: `http://localhost:4000`

## Next Steps

- Customize the editor styling
- Add more TipTap extensions
- Configure production environment variables
- Set up a CI/CD pipeline
- Deploy to AWS
@ -1,219 +0,0 @@
# How to Run the Blog Editor Application

## Prerequisites

1. **Node.js 18+** installed
2. **PostgreSQL/Supabase** database configured
3. **Auth service** running (at `G:\LivingAi\GITTEA_RPO\auth`)
4. **AWS S3** configured (for image uploads)

## Step-by-Step Setup

### 1. Start the Auth Service (Required First)

The blog editor depends on your existing auth service. Make sure it's running:

```bash
cd G:\LivingAi\GITTEA_RPO\auth
npm install  # If not already done
npm start    # or npm run dev
```

The auth service should be running on `http://localhost:3000` (or your configured port).

### 2. Set Up the Backend

#### Install Dependencies
```bash
cd blog-editor/backend
npm install
```

#### Configure Environment
Make sure you have a `.env` file in `blog-editor/backend/`:
```bash
# If you haven't created it yet
cp env.example .env
# Then edit .env with your actual values
```

Your `.env` should have:
- `DATABASE_URL` - Your Supabase connection string
- `AUTH_SERVICE_URL` - URL of the auth service (default: http://localhost:3000)
- AWS credentials for S3
- Other required variables

#### Run Database Migrations
```bash
npm run migrate
```

This creates the `posts` table and indexes in your Supabase database.

#### Start the Backend Server
```bash
npm run dev
```

The backend will start on `http://localhost:5001` (or your configured `PORT`).

You should see:
```
Server running on port 5001
```

### 3. Set Up the Frontend

#### Install Dependencies
```bash
cd blog-editor/frontend
npm install
```

#### Configure Environment
Make sure you have a `.env` file in `blog-editor/frontend/`:
```bash
# If you haven't created it yet
cp env.example .env
# Then edit .env with your actual values
```

Your `.env` should have:
- `VITE_API_URL=http://localhost:5001` - Backend API URL
- `VITE_AUTH_API_URL=http://localhost:3000` - Auth service URL

#### Start the Frontend Dev Server
```bash
npm run dev
```

The frontend will start on `http://localhost:4000`.

You should see:
```
VITE v5.x.x  ready in xxx ms

➜  Local:   http://localhost:4000/
➜  Network: use --host to expose
```

## Running Everything Together

### Option 1: Separate Terminals (Recommended)

**Terminal 1 - Auth Service:**
```bash
cd G:\LivingAi\GITTEA_RPO\auth
npm start
```

**Terminal 2 - Blog Editor Backend:**
```bash
cd blog-editor/backend
npm run dev
```

**Terminal 3 - Blog Editor Frontend:**
```bash
cd blog-editor/frontend
npm run dev
```

### Option 2: npm Scripts (if you create them)

You could create a root `package.json` with scripts to run everything, but separate terminals are easier for debugging.

## Verify Everything Is Working

### 1. Check the Auth Service
```bash
curl http://localhost:3000/health
# Should return: {"ok":true}
```

### 2. Check the Backend
```bash
curl http://localhost:5001/api/health
# Should return: {"status":"ok"}
```

### 3. Check the Database Connection
```bash
curl http://localhost:5001/api/test-db
# Should return database connection info
```
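The two backend endpoints checked above can be written as plain handler functions. This is a sketch, not the repo's actual code; the route paths follow this document, and injecting the pool keeps the database handler testable:

```javascript
// Sketch of the health/test-db endpoints the curl checks above expect.
function healthHandler(req, res) {
  res.json({ status: 'ok' })
}

// pool is injected; in the app it would be the pg Pool from the db config module.
function makeTestDbHandler(pool) {
  return async function testDbHandler(req, res) {
    try {
      const { rows } = await pool.query('SELECT NOW() AS now')
      res.json({ connected: true, now: rows[0].now })
    } catch (err) {
      res.status(500).json({ connected: false, error: err.message })
    }
  }
}

// Mounting (assumed app structure):
//   app.get('/api/health', healthHandler)
//   app.get('/api/test-db', makeTestDbHandler(pool))
```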
### 4. Open the Frontend
Open your browser to the frontend URL (`http://localhost:4000` by default).

## First-Time Usage

1. **Open the frontend** in your browser
2. **Click Login** (or navigate to `/login`)
3. **Enter your phone number** (e.g., `+919876543210` or `9876543210`)
4. **Request an OTP** - You'll receive an OTP via SMS (or in the console if using test numbers)
5. **Enter the OTP** to verify
6. **You'll be logged in** and redirected to the dashboard
7. **Create your first post** by clicking "New Post"

## Troubleshooting

### Backend won't start
- Check whether port 5001 is already in use
- Verify the `.env` file exists and has correct values
- Check the database connection string is correct
- Ensure the auth service is running

### Frontend won't start
- Check whether the port is already in use (Vite will auto-select another port)
- Verify the `.env` file exists with `VITE_`-prefixed variables
- Check that the backend is running

### Database connection errors
- Verify the Supabase connection string is correct
- Check that the password doesn't contain special characters that need URL encoding
- Ensure the Supabase database is accessible
- Check the IP whitelist in Supabase settings
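The URL-encoding point above can be made concrete: special characters in the password must be percent-encoded before being embedded in `DATABASE_URL`. A small hypothetical helper:

```javascript
// Hypothetical helper for illustration: build a connection string with the
// password percent-encoded, as the troubleshooting note above requires.
function buildDatabaseUrl(user, password, host, port, db) {
  return `postgresql://${user}:${encodeURIComponent(password)}@${host}:${port}/${db}`
}

// A password like "p@ss/word" becomes "p%40ss%2Fword" inside the URL.
```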
### Auth service connection errors
- Verify the auth service is running on the correct port
- Check `AUTH_SERVICE_URL` in the backend `.env`
- Check `VITE_AUTH_API_URL` in the frontend `.env`

### CORS errors
- Verify that `CORS_ORIGIN` in the backend `.env` matches the frontend URL
- Check that the auth service CORS settings allow your frontend origin

## Production Build

### Build the Frontend
```bash
cd blog-editor/frontend
npm run build
```

The built files will be in `blog-editor/frontend/dist/`.

### Start the Backend in Production
```bash
cd blog-editor/backend
NODE_ENV=production npm start
```

## Quick Commands Reference

```bash
# Backend
cd blog-editor/backend
npm install       # Install dependencies
npm run migrate   # Run database migrations
npm run dev       # Start dev server
npm start         # Start production server

# Frontend
cd blog-editor/frontend
npm install       # Install dependencies
npm run dev       # Start dev server
npm run build     # Build for production
npm run preview   # Preview production build
```
123	S3_CORS_SETUP.md
@ -1,123 +0,0 @@
# S3 CORS Configuration Guide

## Problem
If you're getting a "Failed to fetch" error when uploading images, it's likely a CORS (Cross-Origin Resource Sharing) issue with your S3 bucket.

## Solution: Configure S3 Bucket CORS

### Step 1: Go to the AWS S3 Console
1. Log in to the AWS Console
2. Navigate to S3
3. Click on your bucket (e.g., `livingai-media-bucket`)

### Step 2: Configure CORS
1. Click on the **Permissions** tab
2. Scroll down to **Cross-origin resource sharing (CORS)**
3. Click **Edit**

### Step 3: Add the CORS Configuration
Paste this CORS configuration:

```json
[
  {
    "AllowedHeaders": ["*"],
    "AllowedMethods": ["GET", "PUT", "POST", "HEAD"],
    "AllowedOrigins": [
      "http://localhost:4000",
      "http://localhost:3000",
      "http://localhost:5173",
      "https://your-production-domain.com"
    ],
    "ExposeHeaders": ["ETag"],
    "MaxAgeSeconds": 3000
  }
]
```

**Important:**
- Replace `https://your-production-domain.com` with your actual production domain
- Add any other origins you need (e.g., staging domains)

### Step 4: Save the Configuration
1. Click **Save changes**
2. Wait a few seconds for the changes to propagate

### Step 5: Test Again
Try uploading an image again. The CORS error should be resolved.

## Alternative: Bucket Policy (if CORS alone doesn't work)

If CORS still doesn't work, you may also need to configure the bucket policy:

1. Go to the **Permissions** tab
2. Click **Bucket policy**
3. Add this policy (replace `YOUR-BUCKET-NAME`):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::YOUR-BUCKET-NAME/*"
    },
    {
      "Sid": "AllowPutObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::YOUR-BUCKET-NAME/*"
    }
  ]
}
```

**Note:** This makes your bucket publicly writable. For production, use IAM roles or signed URLs (which this app already uses).

## Verify CORS Is Working

After configuring CORS, check the browser console. You should see:
- No CORS errors
- A successful PUT request to S3
- Image uploads working
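The PUT request that these CORS rules must allow comes from the browser's presigned-URL upload. A hedged sketch of that client-side flow: the `/api/upload/presigned-url` route appears elsewhere in these docs, but the response field names (`uploadUrl`, `publicUrl`) are assumptions:

```javascript
// Sketch of the browser-side upload the CORS rules above must permit.
function sanitizeFilename(name) {
  // Keep only lowercase alphanumerics and dots; collapse the rest to "-".
  return name.toLowerCase().replace(/[^a-z0-9.]+/g, '-')
}

async function uploadImage(file, apiUrl, token) {
  // 1. Ask the backend for a presigned URL.
  const resp = await fetch(`${apiUrl}/api/upload/presigned-url`, {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${token}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({ filename: sanitizeFilename(file.name), contentType: file.type }),
  })
  const { uploadUrl, publicUrl } = await resp.json() // assumed field names

  // 2. PUT the file straight to S3; this is the cross-origin request.
  await fetch(uploadUrl, {
    method: 'PUT',
    headers: { 'Content-Type': file.type },
    body: file,
  })
  return publicUrl
}
```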
## Common Issues

### Issue 1: CORS still not working
- **Solution:** Clear the browser cache and try again
- **Solution:** Make sure the origin in the CORS rules matches exactly (including http vs https and port numbers)

### Issue 2: "Access Denied" error
- **Solution:** Check the IAM permissions for your AWS credentials
- **Solution:** Ensure your AWS user has the `s3:PutObject` permission

### Issue 3: Presigned URL expires
- **Solution:** The presigned URL expires after 3600 seconds (1 hour). If you wait too long, generate a new one.

## Testing the CORS Configuration

You can test whether CORS is configured correctly using curl:

```bash
curl -X OPTIONS \
  -H "Origin: http://localhost:4000" \
  -H "Access-Control-Request-Method: PUT" \
  -H "Access-Control-Request-Headers: Content-Type" \
  https://YOUR-BUCKET-NAME.s3.REGION.amazonaws.com/images/test.jpg \
  -v
```

You should see `Access-Control-Allow-Origin` in the response headers.
@ -1,104 +0,0 @@
# Supabase Database Setup

The blog editor uses Supabase PostgreSQL for storing blog posts. The auth service uses its own separate database.

## Connection String Format

Your Supabase connection string should look like:
```
postgresql://postgres.ekqfmpvebntssdgwtioj:[YOUR-PASSWORD]@aws-1-ap-south-1.pooler.supabase.com:5432/postgres
```

## Setup Steps

### 1. Get Your Supabase Connection String

1. Go to your Supabase project dashboard
2. Navigate to **Settings** → **Database**
3. Find the **Connection string** section
4. Copy the **Connection pooling** connection string (recommended)
5. Replace `[YOUR-PASSWORD]` with your actual database password

### 2. Update the Backend `.env`

Add to `blog-editor/backend/.env`:
```env
DATABASE_URL=postgresql://postgres.ekqfmpvebntssdgwtioj:your_actual_password@aws-1-ap-south-1.pooler.supabase.com:5432/postgres
```

### 3. Create the Database Schema

Run the migrations to create the required tables:
```bash
cd blog-editor/backend
npm run migrate
```

This will create:
- A `users` table (if it doesn't exist - though the auth service has its own users table)
- A `posts` table for blog posts
- The required indexes

### 4. Verify the Connection

Test the database connection:
```bash
# The backend has a test endpoint
curl http://localhost:5001/api/test-db
```

## Database Schema

The blog editor creates these tables in Supabase:

### `posts` table
- `id` (UUID, primary key)
- `user_id` (UUID, foreign key - references the auth service user ID)
- `title` (VARCHAR)
- `content_json` (JSONB) - TipTap editor content
- `slug` (VARCHAR, unique)
- `status` (VARCHAR: 'draft' or 'published')
- `created_at` (TIMESTAMP)
- `updated_at` (TIMESTAMP)

### Indexes
- `idx_posts_user_id` - For fast user queries
- `idx_posts_slug` - For fast slug lookups
- `idx_posts_status` - For filtering by status
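The schema above corresponds roughly to a migration like the following sketch. Column sizes and defaults are assumptions for illustration; the repo's actual migration may differ:

```javascript
// Hypothetical migration matching the documented schema.
const createPostsTable = `
  CREATE TABLE IF NOT EXISTS posts (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    user_id UUID NOT NULL,
    title VARCHAR(255) NOT NULL,
    content_json JSONB,
    slug VARCHAR(255) UNIQUE NOT NULL,
    status VARCHAR(20) NOT NULL DEFAULT 'draft',
    created_at TIMESTAMP NOT NULL DEFAULT NOW(),
    updated_at TIMESTAMP NOT NULL DEFAULT NOW()
  );
  CREATE INDEX IF NOT EXISTS idx_posts_user_id ON posts (user_id);
  CREATE INDEX IF NOT EXISTS idx_posts_slug ON posts (slug);
  CREATE INDEX IF NOT EXISTS idx_posts_status ON posts (status);
`

// pool is the pg Pool from the backend's db config.
async function migrate(pool) {
  await pool.query(createPostsTable)
}
```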
## Important Notes

1. **Separate Databases**:
   - The blog editor uses Supabase PostgreSQL
   - The auth service uses its own separate database
   - User IDs from the auth service are stored as `user_id` in the posts table

2. **Connection Pooling**:
   - The Supabase connection string uses their pooler
   - This is more efficient for serverless/server applications
   - SSL is handled automatically

3. **User IDs**:
   - The `user_id` in the posts table references the user ID from your auth service
   - Make sure the auth service user IDs are UUIDs (which they should be)

4. **Database Name**:
   - The default Supabase database is `postgres`
   - You can create a separate database if needed; just update the connection string

## Troubleshooting

### Connection Issues
- Verify your password is correct
- Check that your IP is allowed in Supabase (Settings → Database → Connection Pooling)
- Ensure you're using the connection pooling URL (not the direct connection)

### Migration Issues
- Make sure you have the proper permissions on the database
- Check that the database exists
- Verify the connection string format is correct

### SSL Issues
- Supabase requires SSL connections
- The code automatically sets `rejectUnauthorized: false` for Supabase
- This is safe because Supabase uses valid SSL certificates
@ -1,117 +0,0 @@
# Troubleshooting Guide

## "Failed to fetch" Error When Uploading Images

This error means the frontend cannot connect to the backend API. Check the following:

### 1. Check the Backend Is Running

Make sure your backend server is running:
```bash
cd blog-editor/backend
npm run dev
```

You should see:
```
✅ Blog Editor Backend is running!
🌐 Server: http://localhost:5001
```

### 2. Check the Frontend API URL

In `blog-editor/frontend/.env`, make sure:
```env
VITE_API_URL=http://localhost:5001
```

**Important:** The port must match your backend port (check your backend terminal output).

### 3. Check the Browser Console

Open browser DevTools (F12) → Console tab and look for:
- Network errors
- CORS errors
- 404 errors
- Connection refused errors

### 4. Test the Backend Manually

Open in a browser or use curl:
```bash
# Health check
curl http://localhost:5001/api/health

# Should return: {"status":"ok"}
```

### 5. Check the CORS Configuration

In `blog-editor/backend/.env`:
```env
CORS_ORIGIN=http://localhost:4000
```

Make sure this matches your frontend URL.

### 6. Check the AWS S3 Configuration

If you see an "AWS S3 is not configured" error, add to `blog-editor/backend/.env`:
```env
AWS_REGION=us-east-1
AWS_ACCESS_KEY_ID=your_access_key
AWS_SECRET_ACCESS_KEY=your_secret_key
S3_BUCKET_NAME=blog-editor-images
```

**Note:** Image uploads won't work without AWS S3 configured. You can:
- Set up AWS S3 (recommended for production)
- Or temporarily disable image uploads for testing

### 7. Check the Authentication Token

Make sure you're logged in. The upload endpoint requires authentication.

Check browser DevTools → Application → Local Storage:
- Should have `access_token`
- Should have `refresh_token`
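The localStorage check above can be done in one line from the DevTools console. A hypothetical helper, not part of the repo:

```javascript
// Paste into the browser console to see which auth keys are present.
function checkTokens(storage) {
  const keys = ['access_token', 'refresh_token', 'user']
  return Object.fromEntries(keys.map((k) => [k, storage.getItem(k) != null]))
}

// In the browser: checkTokens(localStorage)
// All three should be true after a successful login.
```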
### 8. Common Issues

**Issue:** Backend on a different port
- **Fix:** Update `VITE_API_URL` in the frontend `.env` to match the backend port

**Issue:** CORS blocking requests
- **Fix:** Update `CORS_ORIGIN` in the backend `.env` to match the frontend URL

**Issue:** Backend not running
- **Fix:** Start the backend: `cd blog-editor/backend && npm run dev`

**Issue:** Network error
- **Fix:** Check firewall, VPN, or proxy settings

### 9. Test the Upload Endpoint Directly

```bash
# Get your access token from browser localStorage
# Then test:
curl -X POST http://localhost:5001/api/upload/presigned-url \
  -H "Authorization: Bearer YOUR_ACCESS_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"filename":"test.jpg","contentType":"image/jpeg"}'
```

### 10. Enable Detailed Logging

Check the backend terminal for error messages when you try to upload.

## Quick Fix Checklist

- [ ] Backend is running (check the terminal)
- [ ] Frontend `.env` has the correct `VITE_API_URL`
- [ ] Backend `.env` has the correct `CORS_ORIGIN`
- [ ] You're logged in (check localStorage for tokens)
- [ ] Browser console shows no CORS errors
- [ ] AWS S3 is configured (if using image uploads)
@ -7,34 +7,172 @@ const { Pool } = pkg
|
|||
|
||||
// Support both connection string (Supabase) and individual parameters
|
||||
let poolConfig
|
||||
let pool = null
|
||||
|
||||
if (process.env.DATABASE_URL) {
|
||||
// Use connection string (Supabase format)
|
||||
poolConfig = {
|
||||
connectionString: process.env.DATABASE_URL,
|
||||
ssl: {
|
||||
rejectUnauthorized: false // Supabase requires SSL
|
||||
},
|
||||
// Connection pool settings for Supabase
|
||||
max: 20, // Maximum number of clients in the pool
|
||||
idleTimeoutMillis: 30000,
|
||||
connectionTimeoutMillis: 2000,
|
||||
// Validate and prepare pool configuration
|
||||
function createPoolConfig() {
|
||||
if (process.env.DATABASE_URL) {
|
||||
// Use connection string (Supabase format)
|
||||
// Validate connection string format
|
||||
try {
|
||||
const url = new URL(process.env.DATABASE_URL)
|
||||
|
||||
// Check for placeholder passwords
|
||||
if (!url.password || url.password === '[YOUR-PASSWORD]' || url.password.includes('YOUR-PASSWORD')) {
|
||||
const error = new Error('DATABASE_URL contains placeholder password. Please replace [YOUR-PASSWORD] with your actual Supabase password.')
|
||||
error.code = 'INVALID_PASSWORD'
|
||||
throw error
|
||||
}
|
||||
|
||||
if (url.password.length < 1) {
|
||||
console.warn('⚠️ DATABASE_URL appears to be missing password. Check your .env file.')
|
||||
}
|
||||
} catch (e) {
|
||||
if (e.code === 'INVALID_PASSWORD') {
|
||||
throw e
|
||||
}
|
||||
console.error('❌ Invalid DATABASE_URL format. Expected: postgresql://user:password@host:port/database')
|
||||
throw new Error('Invalid DATABASE_URL format')
|
||||
}
|
||||
|
||||
poolConfig = {
|
||||
connectionString: process.env.DATABASE_URL,
|
||||
ssl: {
|
||||
rejectUnauthorized: false // Supabase requires SSL
|
||||
},
|
||||
// Connection pool settings for Supabase
|
||||
// Reduced max connections to prevent pool limit issues when running multiple apps
|
||||
max: 5, // Maximum number of clients in the pool (reduced for hot reload compatibility)
|
||||
idleTimeoutMillis: 30000,
|
||||
connectionTimeoutMillis: 10000, // Increased timeout for Supabase
|
||||
allowExitOnIdle: false,
|
||||
}
|
||||
} else {
|
||||
// Use individual parameters (local development)
|
||||
poolConfig = {
|
||||
host: process.env.DB_HOST || 'localhost',
|
||||
port: process.env.DB_PORT || 5432,
|
||||
database: process.env.DB_NAME || 'blog_editor',
|
||||
user: process.env.DB_USER || 'postgres',
|
||||
password: process.env.DB_PASSWORD,
|
||||
ssl: process.env.NODE_ENV === 'production' ? { rejectUnauthorized: false } : false,
|
||||
connectionTimeoutMillis: 10000,
|
||||
}
|
||||
}
|
||||
} else {
|
||||
// Use individual parameters (local development)
|
||||
poolConfig = {
|
||||
host: process.env.DB_HOST || 'localhost',
|
||||
port: process.env.DB_PORT || 5432,
|
||||
database: process.env.DB_NAME || 'blog_editor',
|
||||
user: process.env.DB_USER || 'postgres',
|
||||
password: process.env.DB_PASSWORD,
|
||||
ssl: process.env.NODE_ENV === 'production' ? { rejectUnauthorized: false } : false,
|
||||
|
||||
return poolConfig
|
||||
}

// Initialize pool
try {
  poolConfig = createPoolConfig()
  pool = new Pool(poolConfig)
} catch (error) {
  if (error.code === 'INVALID_PASSWORD') {
    console.error('\n❌ ' + error.message)
    console.error('💡 Please update your .env file with the correct DATABASE_URL')
    console.error('💡 Format: postgresql://postgres.xxx:YOUR_ACTUAL_PASSWORD@aws-1-ap-south-1.pooler.supabase.com:5432/postgres\n')
  }
  // Create a dummy pool to prevent crashes, but it won't work
  pool = new Pool({ connectionString: 'postgresql://invalid' })
}

// Reset pool function for recovery from authentication errors
export async function resetPool() {
  if (pool) {
    try {
      await pool.end() // Wait for pool to fully close
    } catch (err) {
      // Ignore errors during pool closure
    }
    pool = null
  }

  // Wait a moment for Supabase circuit breaker to potentially reset
  await new Promise(resolve => setTimeout(resolve, 2000))

  try {
    poolConfig = createPoolConfig()
    pool = new Pool(poolConfig)
    setupPoolHandlers()
    return true
  } catch (error) {
    console.error('❌ Failed to reset connection pool:', error.message)
    return false
  }
}

export const pool = new Pool(poolConfig)
// Setup pool error handlers
function setupPoolHandlers() {
  if (pool) {
    pool.on('error', (err) => {
      console.error('❌ Unexpected error on idle database client:', err.message)
      // Don't exit on error - let the application handle it
    })
  }
}

pool.on('error', (err) => {
  console.error('Unexpected error on idle client', err)
  process.exit(-1)
})
setupPoolHandlers()

export { pool }

// Helper function to test connection and provide better error messages
export async function testConnection(retryCount = 0) {
  try {
    // If pool is null or invalid, try to recreate it
    if (!pool || pool.ended) {
      console.log('   🔄 Recreating connection pool...')
      await resetPool()
    }

    const client = await pool.connect()
    const result = await client.query('SELECT NOW()')
    client.release()
    return { success: true, time: result.rows[0].now }
  } catch (error) {
    // Handle authentication errors
    if (error.message.includes('password authentication failed') ||
        error.message.includes('password') && error.message.includes('failed')) {
      const err = new Error('Database authentication failed. Check your password in DATABASE_URL')
      err.code = 'AUTH_FAILED'
      throw err
    }
    // Handle circuit breaker / too many attempts
    else if (error.message.includes('Circuit breaker') ||
             error.message.includes('too many') ||
             error.message.includes('connection attempts') ||
             error.message.includes('rate limit') ||
             error.code === '53300') { // PostgreSQL error code for too many connections
      // If this is the first retry, try resetting the pool and waiting
      if (retryCount === 0) {
        console.log('   ⏳ Circuit breaker detected. Waiting and retrying...')
        await new Promise(resolve => setTimeout(resolve, 3000)) // Wait 3 seconds
        await resetPool()
        // Retry once
        return testConnection(1)
      }
      const err = new Error('Too many failed connection attempts. Supabase connection pooler has temporarily blocked connections. Please wait 30-60 seconds and restart the server, or verify your DATABASE_URL password is correct.')
      err.code = 'CIRCUIT_BREAKER'
      throw err
    }
    // Handle host resolution errors
    else if (error.message.includes('ENOTFOUND') || error.message.includes('getaddrinfo')) {
      const err = new Error('Cannot resolve database host. Check your DATABASE_URL hostname.')
      err.code = 'HOST_ERROR'
      throw err
    }
    // Handle timeout errors
    else if (error.message.includes('timeout') || error.message.includes('ETIMEDOUT')) {
      const err = new Error('Database connection timeout. Check if the database is accessible and your network connection.')
      err.code = 'TIMEOUT'
      throw err
    }
    // Handle invalid connection string
    else if (error.message.includes('invalid connection') || error.message.includes('connection string')) {
      const err = new Error('Invalid DATABASE_URL format. Expected: postgresql://user:password@host:port/database')
      err.code = 'INVALID_FORMAT'
      throw err
    }
    throw error
  }
}
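The `testConnection` function above maps raw `pg` error messages onto stable codes that the startup checks can branch on. That mapping is pure string logic and can be pulled out and tested on its own; a sketch (the function name `classifyDbError` is hypothetical, but the message fragments and codes mirror the branches above):

```javascript
// Hypothetical extraction of the error classification used by testConnection.
// '53300' is PostgreSQL's "too many connections" SQLSTATE.
function classifyDbError(message, code) {
  if (message.includes('password authentication failed')) return 'AUTH_FAILED'
  if (code === '53300' ||
      ['Circuit breaker', 'too many', 'connection attempts', 'rate limit']
        .some(fragment => message.includes(fragment))) return 'CIRCUIT_BREAKER'
  if (message.includes('ENOTFOUND') || message.includes('getaddrinfo')) return 'HOST_ERROR'
  if (message.includes('timeout') || message.includes('ETIMEDOUT')) return 'TIMEOUT'
  if (message.includes('invalid connection') || message.includes('connection string')) return 'INVALID_FORMAT'
  return 'UNKNOWN'
}
```

Keeping the classification separate from the retry logic makes the per-code guidance printed at startup easy to verify without a live database.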
@@ -1,6 +1,6 @@
import { S3Client } from '@aws-sdk/client-s3'
import { getSignedUrl } from '@aws-sdk/s3-request-presigner'
import { PutObjectCommand, HeadBucketCommand } from '@aws-sdk/client-s3'
import { PutObjectCommand, ListObjectsV2Command } from '@aws-sdk/client-s3'
import { v4 as uuid } from 'uuid'
import dotenv from 'dotenv'
import logger from '../utils/logger.js'

@@ -19,10 +19,13 @@ export const isS3Configured = () => {
// Get bucket name (support both env var names)
export const BUCKET_NAME = process.env.S3_BUCKET_NAME || process.env.AWS_BUCKET_NAME

// Get AWS region (default to us-east-1 if not specified)
export const AWS_REGION = process.env.AWS_REGION || 'us-east-1'

// Only create S3 client if credentials are available
export const s3Client = isS3Configured()
  ? new S3Client({
      region: process.env.AWS_REGION || 'us-east-1',
      region: AWS_REGION,
      credentials: {
        accessKeyId: process.env.AWS_ACCESS_KEY_ID,
        secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,

@@ -30,8 +33,8 @@ export const s3Client = isS3Configured()
    })
  : null

// Export HeadBucketCommand for health checks
export { HeadBucketCommand }
// Export ListObjectsV2Command for health checks (only requires s3:ListBucket permission)
export { ListObjectsV2Command }

export async function getPresignedUploadUrl(filename, contentType) {
  logger.s3('PRESIGNED_URL_REQUEST', { filename, contentType })

@@ -71,7 +74,8 @@ export async function getPresignedUploadUrl(filename, contentType) {
  const startTime = Date.now()
  const uploadUrl = await getSignedUrl(s3Client, command, { expiresIn: 3600 })
  const duration = Date.now() - startTime
  const imageUrl = `https://${BUCKET_NAME}.s3.${process.env.AWS_REGION || 'us-east-1'}.amazonaws.com/${key}`
  // Generate S3 public URL (works for all standard AWS regions)
  const imageUrl = `https://${BUCKET_NAME}.s3.${AWS_REGION}.amazonaws.com/${key}`

  logger.s3('PRESIGNED_URL_CREATED', {
    key,
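The `imageUrl` built above uses virtual-hosted-style S3 addressing, which can be checked without touching AWS at all. A minimal sketch of the same string construction (the bucket, region, and key values below are made up for illustration):

```javascript
// Mirrors the imageUrl construction in getPresignedUploadUrl above.
// Virtual-hosted-style addressing: https://<bucket>.s3.<region>.amazonaws.com/<key>
function publicS3Url(bucket, region, key) {
  return `https://${bucket}.s3.${region}.amazonaws.com/${key}`
}
```

Reading the region from the shared `AWS_REGION` constant, as the new code does, keeps this URL consistent with the region the `S3Client` actually signs for.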
@@ -7,7 +7,8 @@
  "scripts": {
    "start": "node server.js",
    "dev": "node --watch server.js",
    "migrate": "node migrations/migrate.js"
    "migrate": "node migrations/migrate.js",
    "test-s3": "node test-s3-access.js"
  },
  "dependencies": {
    "@aws-sdk/client-s3": "^3.490.0",

@@ -16,7 +17,6 @@
    "cors": "^2.8.5",
    "dotenv": "^16.3.1",
    "express": "^4.18.2",
    "multer": "^2.0.2",
    "pg": "^8.11.3",
    "slugify": "^1.6.6",
    "uuid": "^9.0.1"
@@ -1,48 +1,9 @@
import express from 'express'
import multer from 'multer'
import path from 'path'
import { fileURLToPath } from 'url'
import fs from 'fs'
import { getPresignedUploadUrl } from '../config/s3.js'
import logger from '../utils/logger.js'
import { v4 as uuid } from 'uuid'

const __filename = fileURLToPath(import.meta.url)
const __dirname = path.dirname(__filename)

const router = express.Router()

// Configure multer for local file storage (TEMPORARY - FOR TESTING ONLY)
const storage = multer.diskStorage({
  destination: (req, file, cb) => {
    const uploadDir = path.join(__dirname, '..', 'images')
    // Ensure directory exists
    if (!fs.existsSync(uploadDir)) {
      fs.mkdirSync(uploadDir, { recursive: true })
    }
    cb(null, uploadDir)
  },
  filename: (req, file, cb) => {
    const ext = path.extname(file.originalname)
    const filename = `${uuid()}${ext}`
    cb(null, filename)
  }
})

const upload = multer({
  storage: storage,
  limits: {
    fileSize: 10 * 1024 * 1024 // 10MB limit
  },
  fileFilter: (req, file, cb) => {
    if (file.mimetype.startsWith('image/')) {
      cb(null, true)
    } else {
      cb(new Error('Only image files are allowed'), false)
    }
  }
})

// Get presigned URL for image upload
// Note: authenticateToken middleware is applied at server level
router.post('/presigned-url', async (req, res) => {

@@ -128,42 +89,4 @@ router.post('/presigned-url', async (req, res) => {
  }
})

// TEMPORARY: Local file upload endpoint (FOR TESTING ONLY - REMOVE IN PRODUCTION)
router.post('/local', upload.single('image'), async (req, res) => {
  try {
    if (!req.file) {
      logger.warn('UPLOAD', 'No file uploaded', null)
      return res.status(400).json({ message: 'No image file provided' })
    }

    logger.transaction('LOCAL_IMAGE_UPLOAD', {
      userId: req.user.id,
      filename: req.file.filename,
      originalName: req.file.originalname,
      size: req.file.size
    })

    // Return the image URL (served statically)
    const imageUrl = `/api/images/${req.file.filename}`

    logger.transaction('LOCAL_IMAGE_UPLOAD_SUCCESS', {
      userId: req.user.id,
      filename: req.file.filename,
      imageUrl
    })

    res.json({
      imageUrl,
      filename: req.file.filename,
      size: req.file.size
    })
  } catch (error) {
    logger.error('UPLOAD', 'Error uploading local image', error)
    res.status(500).json({
      message: 'Failed to upload image',
      error: error.message
    })
  }
})

export default router
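The multer `filename` callback above keeps the caller's extension and replaces the basename with a UUID so uploads can never collide or overwrite each other. The same idea, sketched standalone using Node's built-in `crypto.randomUUID` in place of the `uuid` package (a substitution for self-containment, not what the route actually imports):

```javascript
import path from 'node:path'
import { randomUUID } from 'node:crypto'

// Same shape as the multer filename callback above: preserve the original
// extension, replace the basename with a collision-resistant UUID.
function uniqueFilename(originalname) {
  const ext = path.extname(originalname) // includes the leading dot, e.g. '.png'
  return `${randomUUID()}${ext}`
}
```

Note that `path.extname` only keeps the final extension, so `archive.tar.gz` becomes `<uuid>.gz`; for image uploads guarded by the `image/*` fileFilter this is not an issue.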
@@ -2,18 +2,13 @@ import express from 'express'
import cors from 'cors'
import dotenv from 'dotenv'
import axios from 'axios'
import path from 'path'
import { fileURLToPath } from 'url'
import { pool } from './config/database.js'
import { pool, testConnection, resetPool } from './config/database.js'
import { authenticateToken } from './middleware/auth.js'
import { s3Client, BUCKET_NAME, HeadBucketCommand, isS3Configured } from './config/s3.js'
import { s3Client, BUCKET_NAME, ListObjectsV2Command, isS3Configured } from './config/s3.js'
import postRoutes from './routes/posts.js'
import uploadRoutes from './routes/upload.js'
import logger from './utils/logger.js'

const __filename = fileURLToPath(import.meta.url)
const __dirname = path.dirname(__filename)

dotenv.config()

const app = express()

@@ -96,9 +91,6 @@ app.use((req, res, next) => {
app.use('/api/posts', authenticateToken, postRoutes)
app.use('/api/upload', authenticateToken, uploadRoutes)

// TEMPORARY: Serve static images (FOR TESTING ONLY - REMOVE IN PRODUCTION)
app.use('/api/images', express.static(path.join(__dirname, 'images')))

// Health check
app.get('/api/health', (req, res) => {
  res.json({ status: 'ok' })

@@ -135,6 +127,8 @@ async function performStartupChecks() {
  // 1. Check Database Connection
  console.log('📊 Checking Database Connection...')
  try {
    // Use improved connection test with better error messages
    const connectionTest = await testConnection()
    logger.db('SELECT', 'SELECT NOW(), version()', [])
    const dbResult = await pool.query('SELECT NOW(), version()')
    const dbTime = dbResult.rows[0].now

@@ -163,7 +157,35 @@ async function performStartupChecks() {
  } catch (error) {
    logger.error('DATABASE', 'Database connection failed', error)
    console.error(`   ❌ Database connection failed: ${error.message}`)
    console.error(`   💡 Check your DATABASE_URL in .env file`)

    // Provide specific guidance based on error code
    if (error.code === 'INVALID_PASSWORD' || error.message.includes('[YOUR-PASSWORD]')) {
      console.error(`   🔑 Placeholder password detected in DATABASE_URL`)
      console.error(`   💡 Replace [YOUR-PASSWORD] with your actual Supabase password`)
      console.error(`   💡 Format: postgresql://postgres.xxx:YOUR_ACTUAL_PASSWORD@aws-1-ap-south-1.pooler.supabase.com:5432/postgres`)
    } else if (error.code === 'AUTH_FAILED' || error.message.includes('password authentication failed') || error.message.includes('password')) {
      console.error(`   🔑 Authentication failed - Check your password in DATABASE_URL`)
      console.error(`   💡 Format: postgresql://user:password@host:port/database`)
      console.error(`   💡 Verify your Supabase password is correct`)
    } else if (error.code === 'CIRCUIT_BREAKER' || error.message.includes('Circuit breaker') || error.message.includes('too many')) {
      console.error(`   🔄 Too many failed attempts detected`)
      console.error(`   💡 ${error.message}`)
      console.error(`   💡 The testConnection function will automatically retry after a delay`)
      console.error(`   💡 If this persists, wait 30-60 seconds and restart the server`)
      console.error(`   💡 Verify your DATABASE_URL password is correct in .env`)
    } else if (error.code === 'HOST_ERROR' || error.message.includes('host') || error.message.includes('ENOTFOUND')) {
      console.error(`   🌐 Cannot reach database host - Check your DATABASE_URL hostname`)
      console.error(`   💡 Verify the hostname in your connection string is correct`)
    } else if (error.code === 'TIMEOUT' || error.message.includes('timeout')) {
      console.error(`   ⏱️ Database connection timeout`)
      console.error(`   💡 Check your network connection and database accessibility`)
    } else if (error.code === 'INVALID_FORMAT') {
      console.error(`   📝 Invalid DATABASE_URL format`)
      console.error(`   💡 Expected: postgresql://user:password@host:port/database`)
    } else {
      console.error(`   💡 Check your DATABASE_URL in .env file`)
      console.error(`   💡 Format: postgresql://postgres.xxx:[PASSWORD]@aws-1-ap-south-1.pooler.supabase.com:5432/postgres`)
    }
    return false
  }

@@ -178,11 +200,18 @@ async function performStartupChecks() {
  console.log(`   ✅ AWS credentials configured`)
  console.log(`   🪣 S3 Bucket: ${BUCKET_NAME}`)
  console.log(`   🌍 AWS Region: ${process.env.AWS_REGION || 'us-east-1'}`)
  console.log(`   💡 Using bucket: ${BUCKET_NAME} in region: ${process.env.AWS_REGION || 'us-east-1'}`)

  // Try to check bucket access (this might fail if bucket doesn't exist, but that's okay)
  // Try to check bucket access using ListObjectsV2 (only requires s3:ListBucket permission)
  // This is more compatible with minimal IAM policies
  if (s3Client) {
    try {
      await s3Client.send(new HeadBucketCommand({ Bucket: BUCKET_NAME }))
      // Use ListObjectsV2 with MaxKeys=0 to just check access without listing objects
      // This only requires s3:ListBucket permission (which matches your IAM policy)
      await s3Client.send(new ListObjectsV2Command({
        Bucket: BUCKET_NAME,
        MaxKeys: 0 // Don't actually list objects, just check access
      }))
      console.log(`   ✅ S3 bucket is accessible`)
    } catch (s3Error) {
      if (s3Error.name === 'NotFound' || s3Error.$metadata?.httpStatusCode === 404) {

@@ -191,6 +220,12 @@ async function performStartupChecks() {
      } else if (s3Error.name === 'Forbidden' || s3Error.$metadata?.httpStatusCode === 403) {
        console.log(`   ⚠️ S3 bucket access denied`)
        console.log(`   💡 Check IAM permissions for bucket: ${BUCKET_NAME}`)
        console.log(`   💡 Required permissions: s3:ListBucket, s3:PutObject, s3:GetObject`)
        console.log(`   💡 Common issues:`)
        console.log(`      - Credentials in .env don't match IAM user with policy`)
        console.log(`      - Policy not propagated yet (wait 2-3 minutes)`)
        console.log(`      - Wrong region in AWS_REGION`)
        console.log(`   💡 See TROUBLESHOOT_S3_ACCESS.md for detailed troubleshooting`)
      } else {
        console.log(`   ⚠️ S3 bucket check failed: ${s3Error.message}`)
      }

@@ -286,9 +321,32 @@ startServer().catch((error) => {
  process.exit(1)
})

// Graceful shutdown
// Graceful shutdown - important for hot reload to prevent connection pool exhaustion
process.on('SIGTERM', async () => {
  console.log('SIGTERM signal received: closing HTTP server')
  await pool.end()
  console.log('SIGTERM signal received: closing HTTP server and database connections')
  try {
    await pool.end()
    console.log('✅ Database connections closed')
  } catch (error) {
    console.error('❌ Error closing database connections:', error.message)
  }
  process.exit(0)
})

process.on('SIGINT', async () => {
  console.log('SIGINT signal received: closing HTTP server and database connections')
  try {
    await pool.end()
    console.log('✅ Database connections closed')
  } catch (error) {
    console.error('❌ Error closing database connections:', error.message)
  }
  process.exit(0)
})

// Warning about running multiple apps with hot reload
if (process.env.NODE_ENV !== 'production') {
  console.log('\n⚠️ Running in development mode with hot reload')
  console.log('   💡 If running both blog-editor and api-v1, connection pools are reduced to prevent Supabase limits')
  console.log('   💡 Consider running only one in hot reload mode if you hit connection limits\n')
}
@@ -0,0 +1,90 @@
import { S3Client, ListObjectsV2Command, PutObjectCommand } from '@aws-sdk/client-s3'
import { getSignedUrl } from '@aws-sdk/s3-request-presigner'
import dotenv from 'dotenv'

dotenv.config()

const bucketName = process.env.S3_BUCKET_NAME || process.env.AWS_BUCKET_NAME
const region = process.env.AWS_REGION || 'ap-south-1'
const accessKeyId = process.env.AWS_ACCESS_KEY_ID
const secretAccessKey = process.env.AWS_SECRET_ACCESS_KEY

console.log('\n🔍 S3 Access Diagnostic Test\n')
console.log('Configuration:')
console.log(`  Bucket: ${bucketName || 'NOT SET'}`)
console.log(`  Region: ${region}`)
console.log(`  Access Key ID: ${accessKeyId ? accessKeyId.substring(0, 8) + '...' : 'NOT SET'}`)
console.log(`  Secret Key: ${secretAccessKey ? '***SET***' : 'NOT SET'}\n`)

if (!bucketName) {
  console.error('❌ Bucket name not configured!')
  console.error('   Set S3_BUCKET_NAME or AWS_BUCKET_NAME in .env')
  process.exit(1)
}

if (!accessKeyId || !secretAccessKey) {
  console.error('❌ AWS credentials not configured!')
  console.error('   Set AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY in .env')
  process.exit(1)
}

const client = new S3Client({
  region: region,
  credentials: {
    accessKeyId: accessKeyId,
    secretAccessKey: secretAccessKey,
  },
})

console.log('Testing S3 access...\n')

// Test 1: ListBucket (s3:ListBucket permission)
console.log('1️⃣ Testing ListBucket (s3:ListBucket permission)...')
try {
  const listCommand = new ListObjectsV2Command({
    Bucket: bucketName,
    MaxKeys: 0 // Just check access, don't list objects
  })
  await client.send(listCommand)
  console.log('   ✅ SUCCESS - ListBucket works!')
} catch (error) {
  console.error(`   ❌ FAILED - ${error.name}`)
  console.error(`   Message: ${error.message}`)
  if (error.name === 'Forbidden' || error.$metadata?.httpStatusCode === 403) {
    console.error('\n   💡 This means:')
    console.error('      - Your IAM user does NOT have s3:ListBucket permission')
    console.error('      - OR credentials don\'t match the IAM user with the policy')
    console.error('      - OR policy is not attached to the IAM user')
  } else if (error.name === 'NotFound') {
    console.error('\n   💡 Bucket not found - check bucket name and region')
  }
  process.exit(1)
}

// Test 2: Generate Presigned URL (s3:PutObject permission)
console.log('\n2️⃣ Testing Presigned URL generation (s3:PutObject permission)...')
try {
  const putCommand = new PutObjectCommand({
    Bucket: bucketName,
    Key: 'test/test-file.txt',
    ContentType: 'text/plain'
  })
  const presignedUrl = await getSignedUrl(client, putCommand, { expiresIn: 60 })
  console.log('   ✅ SUCCESS - Presigned URL generated!')
  console.log(`   URL: ${presignedUrl.substring(0, 80)}...`)
} catch (error) {
  console.error(`   ❌ FAILED - ${error.name}`)
  console.error(`   Message: ${error.message}`)
  if (error.name === 'Forbidden' || error.$metadata?.httpStatusCode === 403) {
    console.error('\n   💡 This means:')
    console.error('      - Your IAM user does NOT have s3:PutObject permission')
    console.error('      - OR credentials don\'t match the IAM user with the policy')
  }
  process.exit(1)
}

console.log('\n✅ All tests passed! Your S3 configuration is working correctly.')
console.log('\n💡 If the backend still shows "access denied", try:')
console.log('   1. Restart the backend server')
console.log('   2. Wait 1-2 minutes for IAM changes to propagate')
console.log('   3. Verify credentials in .env match the IAM user with your policy\n')
@@ -35,46 +35,7 @@ export default function Editor({ content, onChange, onImageUpload }) {

    toast.loading('Uploading image...', { id: 'image-upload' })

    // TEMPORARY: Use local upload for testing (REMOVE IN PRODUCTION)
    // TODO: Remove this and use S3 upload instead
    let imageUrl
    try {
      const formData = new FormData()
      formData.append('image', file)

      console.log('Uploading image locally (TEMPORARY):', {
        filename: file.name,
        size: file.size,
        type: file.type
      })

      const response = await api.post('/upload/local', formData, {
        headers: {
          'Content-Type': 'multipart/form-data',
        },
      })

      // Get full URL (backend serves images at /api/images/)
      const baseUrl = import.meta.env.VITE_API_URL || 'http://localhost:5001'
      imageUrl = `${baseUrl}${response.data.imageUrl}`

      console.log('Local upload successful:', {
        imageUrl,
        filename: response.data.filename
      })
    } catch (error) {
      console.error('Local upload failed:', error)
      if (error.code === 'ERR_NETWORK' || error.message === 'Network Error') {
        throw new Error('Cannot connect to server. Make sure the backend is running.')
      }
      if (error.response?.status === 401) {
        throw new Error('Authentication failed. Please login again.')
      }
      throw new Error(error.response?.data?.message || error.message || 'Failed to upload image')
    }

    /* ORIGINAL S3 UPLOAD CODE (COMMENTED OUT FOR TESTING)
    // Get presigned URL
    // Get presigned URL from backend
    let data
    try {
      const response = await api.post('/upload/presigned-url', {

@@ -96,7 +57,7 @@ export default function Editor({ content, onChange, onImageUpload }) {
      throw error
    }

    // Upload to S3
    // Upload to S3 using presigned URL
    console.log('Uploading to S3:', {
      uploadUrl: data.uploadUrl.substring(0, 100) + '...',
      imageUrl: data.imageUrl,

@@ -136,9 +97,8 @@ export default function Editor({ content, onChange, onImageUpload }) {
      imageUrl: data.imageUrl
    })

    // Insert image in editor
    // Use the image URL from the presigned URL response
    const imageUrl = data.imageUrl
    */
    editor.chain().focus().setImage({
      src: imageUrl,
      alt: file.name,

@@ -222,6 +182,17 @@ export default function Editor({ content, onChange, onImageUpload }) {
    }
  }, [editor])

  // Update editor content when content prop changes
  useEffect(() => {
    if (editor && content !== undefined) {
      const currentContent = editor.getJSON()
      // Only update if content is actually different to avoid infinite loops
      if (JSON.stringify(currentContent) !== JSON.stringify(content)) {
        editor.commands.setContent(content || '')
      }
    }
  }, [content, editor])

  if (!editor) {
    return null
  }
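The new `useEffect` above avoids a feedback loop by deep-comparing the editor's current document against the incoming `content` prop before calling `setContent`. The guard itself is plain data logic and can be sketched standalone (no editor dependency; `needsUpdate` is a hypothetical name, and the documents here are plain JSON stand-ins for editor output):

```javascript
// Hypothetical extraction of the guard in the useEffect above: only report a
// change when the serialized documents differ. JSON.stringify comparison is
// key-order-sensitive, which is acceptable when both documents are produced
// by the same serializer.
function needsUpdate(currentContent, incomingContent) {
  return JSON.stringify(currentContent) !== JSON.stringify(incomingContent)
}
```

Without this guard, `setContent` would change the document, fire `onChange`, update the prop, and re-trigger the effect indefinitely.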