# Storage Package (`@repo/storage`)
A storage bucket package that connects to S3-compatible APIs for file storage.
## Why You Need This
Whether you're storing user uploads or serving static assets, you need a simple abstraction over S3-compatible services. This package provides consistent helpers and typed utilities. Review Getting Started to set up credentials.
## Features
- Connect to any S3-compatible storage API (AWS S3, Backblaze B2, MinIO, etc.)
- Upload, download, and delete files
- List objects in buckets
- Generate pre-signed URLs for temporary access
- Prefix support for organizing files
- Helper utilities for file operations
## Installation
This package is part of the monorepo and can be installed as a dependency in other packages:
```json
{
  "dependencies": {
    "@repo/storage": "workspace:*"
  }
}
```
## Usage
### Initialize the client
First, initialize the S3 client. This should typically be done once at application startup.
```ts
import { initializeS3Client, StorageBucket } from "@repo/storage";

// Initialize the S3 client
initializeS3Client({
  endpoint: "https://s3.amazonaws.com", // or your S3-compatible endpoint
  region: "us-east-1",
  accessKey: "YOUR_ACCESS_KEY",
  secretKey: "YOUR_SECRET_KEY",
  forcePathStyle: false, // Set to true for MinIO, Backblaze B2, etc.
});
```
### Using with different providers
The client can be configured for various S3-compatible providers:
#### AWS S3
```ts
initializeS3Client({
  endpoint: "https://s3.amazonaws.com",
  region: "us-east-1",
  accessKey: process.env.AWS_ACCESS_KEY_ID || "",
  secretKey: process.env.AWS_SECRET_ACCESS_KEY || "",
  forcePathStyle: false,
});
```
#### MinIO
```ts
initializeS3Client({
  endpoint: "http://minio-server:9000", // Your MinIO server URL
  region: "us-east-1", // MinIO uses this as a default
  accessKey: process.env.MINIO_ACCESS_KEY || "",
  secretKey: process.env.MINIO_SECRET_KEY || "",
  forcePathStyle: true, // Required for MinIO
});
```
#### Backblaze B2
```ts
initializeS3Client({
  endpoint: "https://s3.us-west-001.backblazeb2.com", // Use the appropriate endpoint
  region: "us-west-001", // Use your B2 region
  accessKey: process.env.B2_APPLICATION_KEY_ID || "",
  secretKey: process.env.B2_APPLICATION_KEY || "",
  forcePathStyle: true,
});
```
#### DigitalOcean Spaces
```ts
initializeS3Client({
  endpoint: "https://nyc3.digitaloceanspaces.com", // Use your region
  region: "nyc3", // Match your region
  accessKey: process.env.SPACES_ACCESS_KEY || "",
  secretKey: process.env.SPACES_SECRET_KEY || "",
  forcePathStyle: false,
});
```
### Create a bucket instance
Once the client is initialized, create instances of `StorageBucket` for specific buckets and optional prefixes.
```ts
// Create a bucket instance for user uploads
const userUploads = new StorageBucket({
  name: "user-uploads", // Your actual bucket name
  prefix: "users/", // Optional prefix for all objects in this instance
});
```
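Because the prefix is applied to every key used through the instance, you can also create narrower views over the same bucket. A minimal sketch (the bucket name, prefix, and image data are placeholders):

```ts
// A second instance over the same bucket, scoped to one user's folder
const user123Uploads = new StorageBucket({
  name: "user-uploads",
  prefix: "users/123/",
});

// Keys passed through this instance are relative to "users/123/"
const avatarKey = await user123Uploads.upload({
  key: "avatar.png", // Stored as "users/123/avatar.png"
  body: Buffer.from("...binary image data..."),
  contentType: "image/png",
});
```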
### Upload a file
Upload files using the `upload` method. The `body` can be a `Buffer`, `Blob`, `ReadableStream`, or `string`.
```ts
// Upload a file
const key = await userUploads.upload({
  key: "profile-image.jpg", // Relative key within the prefix (if any)
  body: fileBuffer,
  contentType: "image/jpeg",
  metadata: {
    // Optional metadata
    userId: "123",
    uploadDate: new Date().toISOString(),
  },
});
// Returns the full key (prefix + provided key), e.g. "users/profile-image.jpg"
```
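Because `body` also accepts a plain string, small text or JSON payloads can be uploaded without creating a `Buffer` first. A minimal sketch using the same `userUploads` instance:

```ts
// Upload a small JSON document as a string
const settingsKey = await userUploads.upload({
  key: "123/settings.json",
  body: JSON.stringify({ theme: "dark", notifications: true }),
  contentType: "application/json",
});
```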
### Generate a signed URL
Generate temporary, pre-signed URLs for accessing objects using `getSignedUrl`.
```ts
// Generate a pre-signed URL that expires in 1 hour
const signedUrl = await userUploads.getSignedUrl({
  key: "profile-image.jpg",
  expiresIn: 3600, // Expiration time in seconds
});
```
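Signed URLs are useful for keeping a bucket private while still letting the browser download objects directly. A sketch of a Next.js route handler that redirects to a short-lived URL (the route shape and query parameter are illustrative, and `userUploads` is the instance created above):

```ts
import { NextRequest, NextResponse } from "next/server";

export async function GET(request: NextRequest) {
  const key = request.nextUrl.searchParams.get("key");
  if (!key) {
    return NextResponse.json({ error: "Missing key" }, { status: 400 });
  }

  // Hand the browser a short-lived, pre-signed URL instead of proxying the bytes
  const url = await userUploads.getSignedUrl({ key, expiresIn: 300 });
  return NextResponse.redirect(url);
}
```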
### List objects
List objects within the bucket (respecting the instance's prefix) using `listObjects`.
```ts
// List objects under "users/123/" (the instance prefix plus an additional prefix)
const { keys, isTruncated, nextContinuationToken } = await userUploads.listObjects({
  prefix: "123/", // Additional prefix filtering, relative to the instance prefix
  maxKeys: 100,
});
```
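Building on the result above, the returned keys can be fed straight into the other methods, for example to create temporary download links. This is a sketch; it assumes the keys returned by `listObjects` are accepted as-is by `getSignedUrl`:

```ts
// Build a temporary download link for every object found
const urls = await Promise.all(
  keys.map((key) => userUploads.getSignedUrl({ key, expiresIn: 900 })),
);

// If isTruncated is true, repeat the call with nextContinuationToken
// (assuming listObjects accepts a continuation token, as S3 ListObjectsV2 does).
```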
### Download a file
Retrieve an object's data using `getObject`.
```ts
// Get an object
const { body, contentType, metadata } = await userUploads.getObject({
  key: "profile-image.jpg",
});

// Stream the file data
if (body) {
  // Process the ReadableStream (e.g., pipe to a file or response)
}
```
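For example, to buffer the stream in memory and write it to disk (a minimal sketch that assumes `body` is a web-standard `ReadableStream`, as the comment above suggests):

```ts
import { writeFile } from "node:fs/promises";

const { body, contentType } = await userUploads.getObject({
  key: "profile-image.jpg",
});

if (body) {
  // Collect the ReadableStream into a Buffer, then persist it locally
  const buffer = Buffer.from(await new Response(body).arrayBuffer());
  await writeFile("./profile-image.jpg", buffer);
  console.log(`Saved ${buffer.length} bytes (${contentType})`);
}
```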
### Delete a file
Delete an object using `deleteObject`.
```ts
// Delete an object
await userUploads.deleteObject({
  key: "profile-image.jpg",
});
```
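To remove everything under a prefix, combine `listObjects` and `deleteObject`. A sketch (it assumes the keys returned by `listObjects` can be passed to `deleteObject` unchanged, and that a single page of results is enough):

```ts
// Delete every object under a user's folder
const { keys } = await userUploads.listObjects({ prefix: "123/", maxKeys: 1000 });
await Promise.all(keys.map((key) => userUploads.deleteObject({ key })));
```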
## Helper Utilities
The package provides several helper functions:
```ts
import {
  generateUniqueKey,
  getFileExtension,
  getContentTypeFromExtension,
  formatFileSize,
  createStorageEventLogger,
} from "@repo/storage";

// Generate a unique key with timestamp and random suffix
const key = generateUniqueKey("user-123/", ".jpg");
// Example: "user-123/1648218367832-a7b3c9d8.jpg"

// Get file extension
const ext = getFileExtension("document.pdf"); // => ".pdf"

// Get content type from extension
const contentType = getContentTypeFromExtension(".jpg"); // => "image/jpeg"

// Format file size into human-readable string
const size = formatFileSize(1048576); // => "1 MB"

// Create an event logger (useful for debugging or monitoring)
const logEvent = createStorageEventLogger();
logEvent("upload", { key: "file.jpg", size: 1048576 });
```
## Usage with Next.js / React Router
Here's an example of how to handle file uploads in an API route (adaptable for React Router loaders/actions).
```ts
import { NextRequest, NextResponse } from "next/server"; // Or relevant React Router imports
import {
  initializeS3Client,
  StorageBucket,
  generateUniqueKey,
  getFileExtension,
} from "@repo/storage";

// Ensure the client is initialized (e.g., in server startup)
// initializeS3Client(...);

const mediaBucket = new StorageBucket({
  name: process.env.S3_BUCKET_NAME || "media",
  prefix: "uploads/",
});

export async function POST(request: NextRequest) { // Or your action function
  try {
    const formData = await request.formData();
    const file = formData.get("file") as File;

    if (!file) {
      return NextResponse.json({ error: "No file provided" }, { status: 400 });
    }

    const buffer = Buffer.from(await file.arrayBuffer());
    const extension = getFileExtension(file.name);

    // Generate a unique key to avoid collisions
    const key = generateUniqueKey("", extension);

    const uploadedKey = await mediaBucket.upload({
      key,
      body: buffer,
      contentType: file.type,
      metadata: {
        originalName: file.name,
        size: String(file.size),
      },
    });

    // Optionally generate a signed URL for immediate access
    const url = await mediaBucket.getSignedUrl({
      key: uploadedKey,
      expiresIn: 3600 * 24, // 24 hours
    });

    return NextResponse.json({ key: uploadedKey, url });
  } catch (error) {
    console.error("Upload error:", error);
    return NextResponse.json({ error: "Upload failed" }, { status: 500 });
  }
}
```
Note: In production, configure the S3 credentials through environment variables rather than hard-coding them.
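For example, a startup module might read the configuration from the environment and fail fast when it is missing (the variable names here are illustrative, not part of the package):

```ts
import { initializeS3Client } from "@repo/storage";

const { S3_ENDPOINT, S3_REGION, S3_ACCESS_KEY, S3_SECRET_KEY } = process.env;

if (!S3_ENDPOINT || !S3_ACCESS_KEY || !S3_SECRET_KEY) {
  throw new Error("Missing S3 storage configuration");
}

initializeS3Client({
  endpoint: S3_ENDPOINT,
  region: S3_REGION || "us-east-1",
  accessKey: S3_ACCESS_KEY,
  secretKey: S3_SECRET_KEY,
  forcePathStyle: process.env.S3_FORCE_PATH_STYLE === "true",
});
```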