A deep dive into implementing enterprise-level file uploads with S3 multipart upload, encryption, role-based access control, and comprehensive monitoring using Node.js and TypeScript
Most file upload systems are built as an afterthought. You set up a simple endpoint, add some validation for file size, and consider the job done. But when the requirements involve sensitive data, multi-gigabyte files, and enterprise-grade security, that bare-bones approach falls apart.
I recently set out to build a system that could reliably handle everything from small text files to multi-gigabyte archives while enforcing strong security, monitoring, and role-based access control. This isn't just another "upload to S3" tutorial. It's about solving real problems that traditional upload systems often ignore.
Large file uploads tend to fail or time out. Weak or missing access controls can expose private data to the wrong users. And without encryption or logging, files in storage or transit are vulnerable targets, with no way to trace misuse if it happens.
To address these gaps, I turned to AWS S3's multipart upload feature, which allows large files to be broken into smaller, more manageable pieces, making the process more reliable even over unstable networks. Combined with services like Cognito for authentication, IAM for fine-grained permissions, KMS for encryption, and CloudWatch for monitoring, I was able to design a secure, scalable file upload system that directly tackles these real-world challenges.
This system integrates seven AWS services to create something that actually works in production.
The Node.js/TypeScript backend handles the coordination between these services, providing a clean API interface while managing the complexity underneath.
Most systems either settle for a bare-bones auth layer or roll their own authentication. My system uses AWS Cognito User Pools, which gives you enterprise features without the maintenance overhead.
The authentication flow looks like this:
// POST /auth/login
{
  "email": "admin@example.com",
  "password": "SecurePass123!"
}

// Response includes role information
{
  "success": true,
  "data": {
    "token": "jwt_token_here",
    "user": {
      "email": "admin@example.com",
      "role": {
        "role": "admin",
        "roleArn": "arn:aws:iam::account:role/SecureUpload-AdminRole"
      }
    }
  }
}
The JWT token contains the user's role, which the backend uses to generate temporary AWS credentials via STS AssumeRole. This means users never get permanent AWS access keys; their credentials expire after one hour, limiting the blast radius if something goes wrong.
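The token check itself lives in middleware (see the middlewares/ directory later). Here is a minimal sketch of what that validation could look like with AWS's aws-jwt-verify package; the pool ID, client ID, and the exact shape of the claims attached to req.user are assumptions, not the project's actual values:

import { Request, Response, NextFunction } from 'express';
import { CognitoJwtVerifier } from 'aws-jwt-verify';

// Hypothetical Cognito identifiers; substitute your own user pool and app client
const verifier = CognitoJwtVerifier.create({
  userPoolId: 'eu-north-1_EXAMPLE',
  tokenUse: 'access',
  clientId: 'example-app-client-id'
});

export async function authenticate(req: Request, res: Response, next: NextFunction) {
  const token = req.headers.authorization?.replace('Bearer ', '');
  if (!token) {
    return res.status(401).json({ success: false, error: 'Missing token' });
  }

  try {
    // Verifies the signature, expiry, issuer, and client against the user pool
    const payload = await verifier.verify(token);
    (req as any).user = payload; // downstream handlers read role information from here
    next();
  } catch {
    res.status(401).json({ success: false, error: 'Invalid or expired token' });
  }
}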
Rather than all-or-nothing access, the system implements three distinct roles with specific permissions:
Admin Role - Full system access including user management and system monitoring. Can upload, download, delete any file, plus manage other users and view CloudWatch metrics.
Uploader Role - Can upload new files and manage their own uploads. This role can generate presigned URLs for uploads, complete multipart uploads, and list their own files, but can't delete files uploaded by others.
Viewer Role - Read-only access to files they're authorized to see. Can generate presigned download URLs and list files, but can't upload or modify anything.
Each of these roles is backed by an IAM role whose policy is crafted to grant exactly those permissions. For example, the Uploader role policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:PutObjectAcl",
        "s3:PutObjectTagging",
        "s3:GetObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::project3tmp/*",
        "arn:aws:s3:::project3tmp"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "kms:Encrypt",
        "kms:Decrypt",
        "kms:GenerateDataKey"
      ],
      "Resource": "arn:aws:kms:eu-north-1:account:key/your-kms-key-id"
    }
  ]
}
Here's where the system gets interesting: instead of generating generic presigned URLs, the system creates them based on the authenticated user's specific role and permissions. This happens through a careful orchestration of AWS STS and IAM policies.
The system provides two distinct endpoints for different upload scenarios:
/files/upload-url - for files under 100MB
/files/multipart/initiate - for files larger than 5MB
The client determines which endpoint to use based on file size, but both follow the same security pattern.
When a user requests an upload URL, the backend:
// From the actual FileService implementation
async generateUploadUrl(user: any, uploadRequest: fileUploadRequest): Promise<fileUploadResponse> {
  // First, assume the user's IAM role
  const tempCredentials = await this.assumeUserRole(user);

  // Create S3 client with temporary credentials
  const tempS3Client = new S3Client({
    region: config.s3.region,
    credentials: {
      accessKeyId: tempCredentials.accessKeyId,
      secretAccessKey: tempCredentials.secretAccessKey,
      sessionToken: tempCredentials.sessionToken
    }
  });

  const fileId = crypto.randomUUID();
  const cleanFileName = uploadRequest.fileName.replace(/[^a-zA-Z0-9.\-_]/g, '_');
  const timestamp = new Date().toISOString().replace(/[:.]/g, '-');
  const s3Key = `uploads/${timestamp}/${fileId}_${cleanFileName}`;

  // Generate presigned URL with role-specific permissions
  const putObjectCommand = new PutObjectCommand({
    Bucket: config.s3.bucketName,
    Key: s3Key,
    ContentType: uploadRequest.contentType,
    ServerSideEncryption: 'aws:kms',
    SSEKMSKeyId: KMS_KEY_ID,
    Metadata: {
      'uploaded-by': user.email,
      'user-id': user.userId,
      'file-id': fileId,
      'user-role': user.role.role,
      'original-name': uploadRequest.fileName,
      'file-size': uploadRequest.fileSize.toString(),
      ...uploadRequest.metadata
    }
  });

  const uploadUrl = await getSignedUrl(tempS3Client, putObjectCommand, { expiresIn: 900 });

  return { uploadUrl, fileId, expiresIn: 900, s3Key };
}
The assumeUserRole method is where the real security magic happens:
private async assumeUserRole(user: any): Promise<any> {
  const roleArn = user.role.iamRoleArn; // e.g., 'arn:aws:iam::account:role/SecureUpload-UploaderRole'
  const sessionName = `SecureUpload-${user.email.replace('@', '-')}-${Date.now()}`;

  const assumeRoleCommand = new AssumeRoleCommand({
    RoleArn: roleArn,
    RoleSessionName: sessionName,
    DurationSeconds: 3600 // 1 hour
  });

  const assumeRoleResponse = await this.stsClient.send(assumeRoleCommand);

  return {
    accessKeyId: assumeRoleResponse.Credentials.AccessKeyId!,
    secretAccessKey: assumeRoleResponse.Credentials.SecretAccessKey!,
    sessionToken: assumeRoleResponse.Credentials.SessionToken!,
    expiration: assumeRoleResponse.Credentials.Expiration!
  };
}
This means every presigned URL is scoped to the permissions of the user's IAM role, the credentials behind it expire on their own, and the session name ties each operation back to a specific user for auditing.
For large files, the multipart flow is more complex but follows the same security principles:
async initiateMultipartUpload(user: any, uploadRequest: MultipartUploadRequest): Promise<MultipartUploadResponse> {
  // Same role assumption process...
  const tempCredentials = await this.assumeUserRole(user);
  const tempS3Client = new S3Client({
    region: config.s3.region,
    credentials: tempCredentials
  });

  // Same key construction as the single-upload path
  const fileId = crypto.randomUUID();
  const cleanFileName = uploadRequest.fileName.replace(/[^a-zA-Z0-9.\-_]/g, '_');
  const timestamp = new Date().toISOString().replace(/[:.]/g, '-');
  const s3Key = `uploads/${timestamp}/${fileId}_${cleanFileName}`;

  // Create the multipart upload
  const createCommand = new CreateMultipartUploadCommand({
    Bucket: config.s3.bucketName,
    Key: s3Key,
    ContentType: uploadRequest.contentType,
    ServerSideEncryption: 'aws:kms',
    SSEKMSKeyId: KMS_KEY_ID,
    // Same metadata and tagging as single uploads
  });

  const multipartResponse = await tempS3Client.send(createCommand);
  const uploadId = multipartResponse.UploadId!;

  // Generate presigned URLs for each part
  const partSize = uploadRequest.partSize || 100 * 1024 * 1024; // 100MB default
  const totalParts = Math.ceil(uploadRequest.fileSize / partSize);
  const partUrls = [];

  for (let partNumber = 1; partNumber <= totalParts; partNumber++) {
    const uploadPartCommand = new UploadPartCommand({
      Bucket: config.s3.bucketName,
      Key: s3Key,
      PartNumber: partNumber,
      UploadId: uploadId
    });

    const partUrl = await getSignedUrl(tempS3Client, uploadPartCommand, {
      expiresIn: 3600 // 1 hour for large uploads
    });

    partUrls.push({ partNumber, uploadUrl: partUrl });
  }

  return {
    uploadId,
    fileId,
    partUrls,
    totalParts,
    expiresIn: 3600,
    s3Key
  };
}
The key insight is that even the multipart upload URLs are generated using the user's temporary credentials. If the user's role lacks the required permissions, the process fails early, either when the role can't be assumed or when S3 rejects the CreateMultipartUpload call made with those credentials.
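Once every part is uploaded, the client reports the returned ETags back to the API and the service finalizes the object. A sketch of that completion step, reusing the same role-assumption pattern as above; the method signature and part payload shape are assumptions, with CompleteMultipartUploadCommand coming from @aws-sdk/client-s3:

async completeMultipartUpload(
  user: any,
  s3Key: string,
  uploadId: string,
  parts: Array<{ partNumber: number; eTag: string }>
): Promise<{ s3Key: string; location?: string }> {
  // Same temporary, role-scoped credentials as the other operations
  const tempCredentials = await this.assumeUserRole(user);
  const tempS3Client = new S3Client({
    region: config.s3.region,
    credentials: tempCredentials
  });

  const completeCommand = new CompleteMultipartUploadCommand({
    Bucket: config.s3.bucketName,
    Key: s3Key,
    UploadId: uploadId,
    MultipartUpload: {
      // S3 requires the parts in ascending PartNumber order
      Parts: parts
        .sort((a, b) => a.partNumber - b.partNumber)
        .map(p => ({ PartNumber: p.partNumber, ETag: p.eTag }))
    }
  });

  const result = await tempS3Client.send(completeCommand);
  return { s3Key, location: result.Location };
}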
// Request upload URL
POST /files/upload-url
{
  "fileName": "document.pdf",
  "fileSize": 1048576,
  "contentType": "application/pdf",
  "metadata": {
    "description": "Important document",
    "category": "business"
  }
}

// Get presigned URL response
{
  "success": true,
  "data": {
    "uploadUrl": "https://project3tmp.s3.eu-north-1.amazonaws.com/...",
    "fileId": "uuid-here",
    "expiresIn": 900,
    "s3Key": "uploads/2025-09-26/uuid_document.pdf"
  }
}

// Large file multipart initialization
POST /files/multipart/initiate
{
  "fileName": "large-file.zip",
  "fileSize": 524288000, // ~500MB
  "contentType": "application/zip",
  "partSize": 104857600, // 100MB parts
  "metadata": {
    "description": "Large archive file"
  }
}

// Returns presigned URLs for each part
{
  "success": true,
  "data": {
    "uploadId": "multipart_upload_id",
    "fileId": "uuid-here",
    "partUrls": [
      {
        "partNumber": 1,
        "uploadUrl": "https://s3-presigned-url-part-1"
      },
      // ... more parts
    ],
    "totalParts": 5,
    "expiresIn": 3600
  }
}
To perform the actual upload, the client uses the presigned URLs provided in the response. For single uploads, it issues a simple PUT request to the uploadUrl. For multipart uploads, it uploads each part using the corresponding uploadUrl for that part number.
The parameters passed in these requests must match those specified when the presigned URL was generated, including the content type and any required headers.
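As a concrete sketch of that client behaviour (browser or Node 18+ fetch; the slicing and error handling are assumptions, not code from the project):

// Single upload: one PUT to the presigned URL, sending the same Content-Type
// that was declared when the URL was generated
async function uploadSmallFile(uploadUrl: string, file: Blob, contentType: string): Promise<void> {
  const res = await fetch(uploadUrl, {
    method: 'PUT',
    headers: { 'Content-Type': contentType },
    body: file
  });
  if (!res.ok) throw new Error(`Upload failed: ${res.status}`);
}

// Multipart upload: PUT each slice to its presigned part URL and record the
// ETag header S3 returns; those ETags are needed to complete the upload.
// (In a browser, the bucket's CORS config must expose the ETag header.)
async function uploadParts(
  file: Blob,
  partSize: number,
  partUrls: Array<{ partNumber: number; uploadUrl: string }>
): Promise<Array<{ partNumber: number; eTag: string }>> {
  const completed: Array<{ partNumber: number; eTag: string }> = [];
  for (const { partNumber, uploadUrl } of partUrls) {
    const start = (partNumber - 1) * partSize;
    const chunk = file.slice(start, start + partSize);
    const res = await fetch(uploadUrl, { method: 'PUT', body: chunk });
    if (!res.ok) throw new Error(`Part ${partNumber} failed: ${res.status}`);
    completed.push({ partNumber, eTag: res.headers.get('ETag') ?? '' });
  }
  return completed; // sent back to the API to finalize the multipart upload
}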
The system implements defense in depth with multiple layers of encryption. All API communication uses HTTPS with TLS 1.2+, ensuring data in transit is protected.
For data at rest, every file is encrypted using AWS KMS with customer-managed keys. The S3 bucket is configured with mandatory encryption:
BucketEncryption:
  ServerSideEncryptionConfiguration:
    - ServerSideEncryptionByDefault:
        SSEAlgorithm: aws:kms
        KMSMasterKeyID: !Ref UploadKMSKey
This means even if someone gains direct S3 access, they can't read the files without also having KMS permissions. The encryption is transparent to the application: S3 handles encryption on write and decryption on read automatically.
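The same transparency applies on the read path. A viewer's presigned download URL is a plain GetObject URL, and S3 decrypts on the fly as long as the assumed role carries kms:Decrypt. A sketch following the same pattern as the upload path; the method name and return shape are assumptions, with GetObjectCommand coming from @aws-sdk/client-s3:

async generateDownloadUrl(user: any, s3Key: string): Promise<{ downloadUrl: string; expiresIn: number }> {
  // The assumed role needs kms:Decrypt on the key for S3 to serve the object;
  // the URL itself carries no encryption parameters because decryption is transparent
  const tempCredentials = await this.assumeUserRole(user);
  const tempS3Client = new S3Client({
    region: config.s3.region,
    credentials: tempCredentials
  });

  const getObjectCommand = new GetObjectCommand({
    Bucket: config.s3.bucketName,
    Key: s3Key
  });

  const downloadUrl = await getSignedUrl(tempS3Client, getObjectCommand, { expiresIn: 900 });
  return { downloadUrl, expiresIn: 900 };
}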
Production systems need visibility into what's happening. This system tracks detailed metrics across multiple dimensions:
Performance Metrics:
ApiRequestDuration - Response time for each endpoint
UploadSpeed - Actual file transfer rates
UploadDuration - End-to-end upload completion time
MultipartPartCount - Distribution of multipart upload complexity

Business Metrics:
UploadCount - Success/failure rates by user role
FileSize - Distribution of uploaded file sizes
ApiRequestCount - Usage patterns by endpoint

Security Metrics:
Security-related events are captured alongside these and surface in the structured audit logs shown below.
All logs use structured JSON format for easy parsing:
{
  "timestamp": "2025-09-26T10:30:00.000Z",
  "level": "INFO",
  "message": "Upload completed successfully",
  "metadata": {
    "fileId": "abc-123",
    "fileName": "document.pdf",
    "fileSize": 1048576,
    "uploadSpeed": "2.5 MB/s",
    "userEmail": "user@example.com",
    "userRole": "uploader",
    "duration": 412,
    "s3Key": "uploads/2025-09-26/abc-123_document.pdf"
  }
}
CloudWatch automatically aggregates these metrics and can trigger alerts when things go wrong, like API response times exceeding 5 seconds or upload failure rates spiking above normal levels.
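A sketch of how one of these metrics might be published with the AWS SDK v3 CloudWatch client; the namespace and dimension names are assumptions, not the project's exact values:

import { CloudWatchClient, PutMetricDataCommand } from '@aws-sdk/client-cloudwatch';

const cloudWatchClient = new CloudWatchClient({ region: 'eu-north-1' });

// Publishes a single UploadDuration data point, tagged with the caller's role
export async function recordUploadDuration(durationMs: number, userRole: string): Promise<void> {
  await cloudWatchClient.send(new PutMetricDataCommand({
    Namespace: 'SecureUpload',
    MetricData: [{
      MetricName: 'UploadDuration',
      Value: durationMs,
      Unit: 'Milliseconds',
      Timestamp: new Date(),
      Dimensions: [{ Name: 'UserRole', Value: userRole }]
    }]
  }));
}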
After testing with files ranging from kilobytes to gigabytes, here's what the system actually delivers:
API response time targets are aggressive, and the system handles concurrent uploads gracefully, limited mainly by AWS service quotas rather than application bottlenecks.
This isn't just a technical exercise; it addresses real business problems:
Scalability: Handles everything from tiny documents to multi-gigabyte files without choking
Security: Multi-layer encryption, temporary credentials, and audit trails satisfy enterprise security requirements
Reliability: Multipart uploads mean large file uploads actually complete successfully
Compliance: Comprehensive logging provides the audit trails required for regulated industries
User Experience: Smart upload handling and progress tracking means users aren't left wondering what happened
The architecture patterns are reusable across different use cases β document management systems, media upload services, backup solutions, or any application that needs to handle file uploads at scale while maintaining security.
The codebase is structured as a proper enterprise application:
app/
├── config/       # AWS service configurations
├── controllers/  # Request handlers for auth and file operations
├── middlewares/  # JWT validation, CloudWatch metrics
├── routes/       # API route definitions
├── services/     # Business logic for AWS integration
├── utils/        # Logging and shared utilities
└── server.ts     # Express application entry point
Key dependencies include AWS SDK v3 for modern async/await support, Express.js for the web framework, and comprehensive TypeScript typing for maintainability.
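As a rough sketch of how those pieces could come together in server.ts; the module paths and middleware names are assumptions based on the layout above:

import express from 'express';
import authRoutes from './routes/auth.routes';
import fileRoutes from './routes/file.routes';
import { metricsMiddleware } from './middlewares/metrics.middleware';

const app = express();
app.use(express.json());
app.use(metricsMiddleware);        // records ApiRequestCount / ApiRequestDuration
app.use('/auth', authRoutes);      // login, token exchange
app.use('/files', fileRoutes);     // upload-url, multipart/initiate, downloads

app.listen(3000, () => console.log('Secure upload API listening on port 3000'));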
This system proves that you can build something robust without sacrificing developer experience or operational simplicity. The AWS services handle the heavy lifting, the code focuses on business logic, and the result is a file upload system that actually works in production.
You can find the complete source code for this secure file upload system on GitHub.