@repo/storage provides a typed API for uploading and retrieving files from Supabase Storage via its S3-compatible endpoint. It uses @aws-sdk/lib-storage for multipart uploads (critical for large files like therapy session recordings) and falls back to the Supabase JS client if S3 fails. Both server-side and client-side (browser) entry points are provided.
The client entry point encapsulates all Supabase dependencies so that apps/web never needs to import @supabase/supabase-js directly.
```mermaid
graph TD
    storage["@repo/storage"]
    storage --> safe["@repo/safe"]
    storage --> logger["@repo/logger"]
    storage --> errors["@repo/errors"]
    storage -.-> typescript_config["@repo/typescript-config"]
    storage -.-> vitest_config["@repo/vitest-config"]
```

| Import | Resolves to | Description |
|---|---|---|
| `@repo/storage` | `src/index.ts` | Server-side: `uploadFile`, `getPublicUrl`, `createSignedUrl`, error classes, and all types |
| `@repo/storage/client` | `src/client.ts` | Client-side (browser): `getStorageConfig`, `uploadFile`, `getPublicUrl`, `StorageUploadError`, and upload types |
The client entry point adds getStorageConfig which handles Supabase auth internally, and omits createSignedUrl (which should only be called server-side).
On the server, construct StorageConfig directly — you have access to env vars and the user's auth token via cookies.
```ts
import type { StorageConfig } from "@repo/storage";

const config: StorageConfig = {
  supabaseUrl: process.env.SUPABASE_URL!,
  supabaseAnonKey: process.env.SUPABASE_ANON_KEY!,
  supabaseProjectRef: process.env.SUPABASE_PROJECT_REF!,
  accessToken: authToken, // from the authenticated user's session
  region: "us-east-1", // optional, defaults to "us-east-1"
};
```
Use getStorageConfig() to build a StorageConfig from the browser. It uses @supabase/ssr internally to read the auth session from cookies — your app code never touches Supabase.
```ts
import { getStorageConfig } from "@repo/storage/client";

const config = await getStorageConfig({
  supabaseUrl: process.env.NEXT_PUBLIC_SUPABASE_URL!,
  supabaseAnonKey: process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY!,
});
```
Throws StorageUploadError if the user is not authenticated.
```ts
import { uploadFile } from "@repo/storage";

const result = await uploadFile(config, {
  bucket: "session-recordings",
  key: "recordings/user-42/patient-7/session-20260402-123.webm",
  body: audioBlob,
  contentType: "audio/webm",
});

if (result.error) {
  console.error("Upload failed:", result.error.message);
  return;
}

console.log(result.data.key); // "recordings/user-42/..."
console.log(result.data.publicUrl); // "https://…/storage/v1/object/public/session-recordings/recordings/user-42/…"
```
```ts
import { getStorageConfig, uploadFile } from "@repo/storage/client";

const config = await getStorageConfig({
  supabaseUrl: process.env.NEXT_PUBLIC_SUPABASE_URL!,
  supabaseAnonKey: process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY!,
});

const result = await uploadFile(
  config,
  {
    bucket: "session-recordings",
    key: `recordings/user-${userId}/session-${Date.now()}.webm`,
    body: recordedBlob,
    contentType: "audio/webm",
  },
  {
    onProgress: (progress) => {
      // progress is 0–1
      setUploadPercent(Math.round(progress * 100));
    },
  },
);
```
```ts
import { getPublicUrl } from "@repo/storage";

const url = getPublicUrl(config, "session-recordings", "recordings/user-42/file.webm");
// "https://…/storage/v1/object/public/session-recordings/recordings/user-42/file.webm"
```
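The public URL is plain string composition, so no network call is involved. A minimal sketch of the idea (`buildPublicUrl` is a hypothetical helper, assuming Supabase's standard `/storage/v1/object/public/{bucket}/{key}` path shape):

```typescript
// Hypothetical sketch: compose a Supabase public-object URL from its parts.
// Assumes the standard public-object path /storage/v1/object/public/{bucket}/{key}.
function buildPublicUrl(supabaseUrl: string, bucket: string, key: string): string {
  // Trim a trailing slash so we never emit "//" in the path.
  const base = supabaseUrl.replace(/\/$/, "");
  return `${base}/storage/v1/object/public/${bucket}/${key}`;
}

console.log(
  buildPublicUrl("https://abc123.supabase.co", "session-recordings", "recordings/user-42/file.webm"),
);
// "https://abc123.supabase.co/storage/v1/object/public/session-recordings/recordings/user-42/file.webm"
```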
Use this to generate a time-limited URL for private files (e.g. passing an audio file to a transcription service).
```ts
import { createSignedUrl } from "@repo/storage";

const result = await createSignedUrl(config, "session-recordings", "recordings/user-42/file.webm", 3600);

if (result.error) {
  console.error("Failed to create signed URL:", result.error.message);
  return;
}

console.log(result.data.signedUrl); // "https://…?token=…" (valid for 1 hour)
```
```mermaid
flowchart TD
    A["uploadFile()"] --> B["S3 multipart upload\n(@aws-sdk/lib-storage)"]
    B -->|Success| C["Return key + publicUrl"]
    B -->|Failure| D["Supabase JS client fallback"]
    D -->|Success| C
    D -->|Failure| E["Return StorageUploadError"]
```

The S3 client is configured to point at Supabase Storage's S3-compatible endpoint (`{supabaseUrl}/storage/v1/s3`). Authentication uses the Supabase project ref as the access key, the anon key as the secret, and the user's JWT as the session token.
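That credential mapping can be sketched as a plain function from `StorageConfig` to the options object accepted by the `S3Client` constructor from `@aws-sdk/client-s3`. This is an illustration of the scheme described above, not the package's actual implementation; `buildS3ClientOptions` is hypothetical:

```typescript
// Local copy of the StorageConfig shape for a self-contained sketch.
interface StorageConfig {
  supabaseUrl: string;
  supabaseAnonKey: string;
  supabaseProjectRef: string;
  accessToken: string;
  region?: string;
}

// Hypothetical sketch of the credential mapping: project ref → access key,
// anon key → secret, user JWT → session token. The returned object matches the
// options shape of `new S3Client(...)` from @aws-sdk/client-s3.
function buildS3ClientOptions(config: StorageConfig) {
  return {
    endpoint: `${config.supabaseUrl}/storage/v1/s3`, // Supabase's S3-compatible endpoint
    region: config.region ?? "us-east-1",
    forcePathStyle: true, // Supabase Storage expects path-style addressing
    credentials: {
      accessKeyId: config.supabaseProjectRef,
      secretAccessKey: config.supabaseAnonKey,
      sessionToken: config.accessToken,
    },
  };
}
```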
```mermaid
sequenceDiagram
    participant App as apps/web
    participant Storage as "@repo/storage/client"
    participant SSR as "@supabase/ssr"
    App->>Storage: getStorageConfig({ url, anonKey })
    Storage->>SSR: createBrowserClient(url, anonKey)
    Storage->>SSR: supabase.auth.getSession()
    Note over SSR: Reads sb-*-auth-token<br/>cookies set during login
    SSR-->>Storage: { session: { access_token } }
    Storage-->>App: StorageConfig
    App->>Storage: uploadFile(config, params)
    Note over Storage: S3 multipart upload<br/>to Supabase Storage
```
All errors extend CircleError with domain: "storage".
| Error | Code | When |
|---|---|---|
| `StorageUploadError` | `STORAGE_UPLOAD_ERROR` | Upload failed, or user is not authenticated (client-side) |
| `StorageDownloadError` | `STORAGE_DOWNLOAD_ERROR` | Failed to create a signed download URL |
Server-side functions (uploadFile, createSignedUrl) return Safe<T> from @repo/safe, so errors are returned as values rather than thrown. Client-side getStorageConfig() throws on failure since it's a precondition for uploads.
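The errors-as-values shape can be illustrated with a minimal stand-in for `Safe<T>`. This is a sketch of the pattern only, assuming a discriminated union like the `result.error` / `result.data` usage shown earlier, not the actual types exported by `@repo/safe`:

```typescript
// Minimal stand-in for the Safe<T> pattern: a result holds either data or an
// error, never both, so callers must narrow before touching `data`.
type Safe<T, E = Error> =
  | { data: T; error: null }
  | { data: null; error: E };

// Toy example of a function returning Safe<T> instead of throwing.
function safeDivide(a: number, b: number): Safe<number> {
  if (b === 0) return { data: null, error: new Error("division by zero") };
  return { data: a / b, error: null };
}

const result = safeDivide(10, 2);
if (result.error) {
  console.error(result.error.message);
} else {
  console.log(result.data); // narrowed to number here
}
```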
Storage keys follow the pattern `recordings/user-{id}/patient-{id}/session-{timestamp}.webm`. Callers construct keys and pass them in.

`StorageConfig` is passed to every function call. This works well because the `accessToken` changes per user.

`@aws-sdk/lib-storage` automatically splits large files into parts, which is critical for 30-90 minute therapy recordings that can be 50-200 MB.

`apps/web` never imports `@supabase/supabase-js` or `@supabase/ssr` directly. The storage package handles auth session retrieval internally via `getStorageConfig()`.

| Script | Description |
|---|---|
| `test` | Run Vitest with coverage |
| `test:watch` | Run Vitest in watch mode |
| `check-types` | Typecheck with `tsc --noEmit` |
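The key convention described above can be sketched as a small helper. This is hypothetical — the package does not export one, and callers build keys inline, as in the upload examples:

```typescript
// Hypothetical helper illustrating the recordings key convention
// recordings/user-{id}/patient-{id}/session-{timestamp}.webm.
// Key construction is left to callers; this just makes the shape explicit.
function recordingKey(
  userId: string | number,
  patientId: string | number,
  timestamp: number,
): string {
  return `recordings/user-${userId}/patient-${patientId}/session-${timestamp}.webm`;
}

console.log(recordingKey(42, 7, 1743595200000));
// "recordings/user-42/patient-7/session-1743595200000.webm"
```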