The boilerplate includes a complete S3 file storage implementation that works with AWS S3, Cloudflare R2, MinIO, and other S3-compatible services. It provides secure file uploads and downloads using signed URLs.
Configure your S3-compatible storage provider in `.env`:

```bash
# S3-compatible storage
S3_ENDPOINT=https://your-s3-endpoint.com
S3_REGION=auto
S3_ACCESS_KEY=your-access-key
S3_SECRET_KEY=your-secret-key
S3_BUCKET=your-bucket-name
```
AWS S3:

```bash
S3_ENDPOINT=https://s3.amazonaws.com
S3_REGION=us-east-1
```

Cloudflare R2:

```bash
S3_ENDPOINT=https://<account-id>.r2.cloudflarestorage.com
S3_REGION=auto
```

MinIO:

```bash
S3_ENDPOINT=https://minio.yourdomain.com
S3_REGION=us-east-1
```
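Since a missing variable typically surfaces only as a confusing SDK error at request time, it can help to validate the configuration once at startup. The helper below is a sketch of ours, not part of the boilerplate — `loadS3Config` and the `S3Config` shape are assumed names:

```typescript
// Hypothetical helper (not part of the boilerplate): reads and validates the
// S3 environment variables at startup so misconfiguration fails fast, with a
// clear error naming the missing variable.
export interface S3Config {
  endpoint: string
  region: string
  accessKey: string
  secretKey: string
  bucket: string
}

export function loadS3Config(env: Record<string, string | undefined>): S3Config {
  const read = (name: string): string => {
    const value = env[name]
    if (!value) throw new Error(`Missing required env var: ${name}`)
    return value
  }
  return {
    endpoint: read('S3_ENDPOINT'),
    region: read('S3_REGION'),
    accessKey: read('S3_ACCESS_KEY'),
    secretKey: read('S3_SECRET_KEY'),
    bucket: read('S3_BUCKET'),
  }
}
```

Calling `loadS3Config(process.env)` during server boot turns a silent misconfiguration into an immediate, descriptive crash.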
The implementation is split across three service layers:
`server/services/storage-server-service.ts` - High-level storage operations:

```ts
// Get signed URL for uploading a file
await getSignedUploadUrl(key, contentType, bucket?)

// Get signed URL for downloading a file
await getSignedDownloadUrl(key, expiresIn?, filename?, bucket?)
```

`server/utils/signed-url.ts` - Low-level signed URL generation:

```ts
// Generate signed upload URL (default: 2 minutes expiry)
await getSignedUploadUrl(s3Client, bucket, key, contentType, expiresIn?)

// Generate signed download URL (default: 1 hour expiry)
await getSignedDownloadUrl(s3Client, bucket, key, expiresIn?, filename?)
```

`app/services/storage-client-service.ts` - Client API for file operations:

```ts
// Upload file with progress tracking
await uploadFileWithSignedUrl(file, contentType, onProgress?, customKey?)

// Download file (triggers browser download)
await downloadFileWithSignedUrl(key, filename?)

// Generate unique storage key
generateStorageKey(filename?)
```
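The docs above don't show what `generateStorageKey` actually produces; as a sketch, a common approach is to prefix a random UUID and preserve the original file extension, so keys never collide even when users upload files with identical names. The prefix and exact format here are assumptions, not the boilerplate's real output:

```typescript
import { randomUUID } from 'node:crypto'

// Sketch only — the boilerplate's actual generateStorageKey may differ.
// Combines a random UUID with the original extension so two uploads of
// "report.pdf" never overwrite each other in the bucket.
export function generateStorageKey(filename?: string): string {
  const extension = filename?.includes('.')
    ? filename.slice(filename.lastIndexOf('.'))
    : ''
  return `uploads/${randomUUID()}${extension}`
}
```

In a browser context the same idea works with the global `crypto.randomUUID()` instead of the `node:crypto` import.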
The boilerplate uses signed URLs for secure, temporary access:

Upload flow: the client asks the server for a signed upload URL, then `PUT`s the file directly to the bucket. The file bytes never pass through your application server.

Download flow: the client asks the server for a signed download URL, then fetches the object directly from the bucket. The URL stops working once its expiry elapses.
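To see why a signed URL grants only temporary, tamper-proof access, here is a deliberately simplified illustration of the mechanics — this is *not* AWS Signature V4 (the real implementation delegates that to the SDK), just the underlying idea of binding an object key to an expiry time with an HMAC:

```typescript
import { createHmac, timingSafeEqual } from 'node:crypto'

// Toy illustration of signed-URL mechanics — NOT AWS Signature V4.
// The signer binds the object key to an expiry timestamp; verification
// fails if either is altered or if the expiry has passed.
export function signKey(secret: string, key: string, expiresAt: number): string {
  return createHmac('sha256', secret).update(`${key}:${expiresAt}`).digest('hex')
}

export function verifyKey(
  secret: string,
  key: string,
  expiresAt: number,
  signature: string,
  nowSec: number = Math.floor(Date.now() / 1000),
): boolean {
  if (nowSec > expiresAt) return false // URL has expired
  const expected = Buffer.from(signKey(secret, key, expiresAt))
  const given = Buffer.from(signature)
  // Constant-time comparison prevents timing attacks on the signature
  return expected.length === given.length && timingSafeEqual(expected, given)
}
```

Because the signature covers both the key and the expiry, a client can neither extend the lifetime of a URL nor point it at a different object without the server's secret.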
For public files (avatars, product images, etc.), you can use public buckets or CDN URLs:

```ts
const publicUrl = `https://cdn.yourdomain.com/${key}`
```

Public URLs don't require server-side URL generation but sacrifice access control.
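One detail worth handling when building public URLs: keys containing spaces or non-ASCII characters must be percent-encoded per path segment, or the resulting URL is invalid. A small sketch (the helper name and CDN hostname are ours, not the boilerplate's):

```typescript
// Hypothetical helper: builds a public CDN URL for an object key, encoding
// each path segment so keys with spaces or unicode remain valid URLs while
// the "/" separators are preserved.
export function publicFileUrl(key: string, cdnBase = 'https://cdn.yourdomain.com'): string {
  const encodedKey = key.split('/').map(encodeURIComponent).join('/')
  return `${cdnBase}/${encodedKey}`
}
```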
Files are tracked in the database with the `File` model:

```prisma
model File {
  id        String   @id @default(dbgenerated("gen_random_uuid()")) @db.Uuid
  key       String   @unique
  name      String
  mimeType  String
  size      BigInt
  createdAt DateTime @default(now())
  updatedAt DateTime @updatedAt
  userId    String   @db.Uuid
  user      User     @relation(fields: [userId], references: [id], onDelete: Cascade)

  @@map("file")
}
```
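A practical consequence of `size` being a `BigInt`: `JSON.stringify` throws on BigInt values, so `File` rows can't be sent to the client as-is. One way to handle this — the `toFileDto` helper and `FileRecord` shape below are illustrative, not part of the boilerplate:

```typescript
// JSON.stringify throws "Do not know how to serialize a BigInt", so convert
// size to a string before sending File rows in an API response.
interface FileRecord {
  id: string
  key: string
  name: string
  mimeType: string
  size: bigint
}

export function toFileDto(file: FileRecord): Omit<FileRecord, 'size'> & { size: string } {
  return { ...file, size: file.size.toString() }
}
```

A string survives round-tripping through JSON without the precision loss a `Number` conversion could introduce for files larger than 2^53 bytes.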
Key points:

- `key` is the S3 object key (the unique identifier within the bucket)
- `size` uses `BigInt` to support large files

To upload a file and record its metadata:

```ts
import { uploadFileWithSignedUrl } from '@/services/storage-client-service'
import { createFile } from '@/services/files-client-service'

const file = event.target.files[0]

// Upload to S3
const { key } = await uploadFileWithSignedUrl(
  file,
  file.type,
  (progress) => console.log(`${progress}% uploaded`)
)

// Save metadata
await createFile({
  key,
  name: file.name,
  size: file.size,
  mimeType: file.type,
  userId: currentUser.id,
})
```
To download a file:

```ts
import { downloadFileWithSignedUrl } from '@/services/storage-client-service'

// Triggers browser download
await downloadFileWithSignedUrl(file.key, file.name)
```
For a full implementation example with drag-and-drop uploads, file tables, and metadata management, see the S3 file upload/download template.