# Handle File and Image Uploads in Web Apps — The Right Way
File uploads done right: client-side validation, Multer middleware, S3 presigned URLs, image optimization, progress tracking, and security hardening.
File uploads are one of those features that seems straightforward until you're debugging why a 200MB video crashed your server, why uploaded images aren't displaying, or why someone uploaded a PHP shell disguised as a JPEG.
Most tutorials show you Multer with `dest: 'uploads/'` and call it done. That's fine for a homework assignment. For anything that touches real users, you need validation, size limits, content type verification, cloud storage, and progress tracking.
## The Architecture Decision
There are two fundamentally different approaches to file uploads:
| Approach | How it works | Best for |
|---|---|---|
| Server relay | File goes to your server, server forwards to storage | Small files, simple apps, need server-side processing |
| Direct upload | File goes directly from browser to cloud storage (S3) | Large files, high traffic, minimal server load |
## Method 1: Server Relay with Multer

```bash
npm install multer
```
### Basic Setup
```javascript
const express = require("express");
const multer = require("multer");
const path = require("path");
const crypto = require("crypto");

// Configure storage
const storage = multer.diskStorage({
  destination: "uploads/",
  filename: (req, file, cb) => {
    // Generate unique filename to prevent overwrites
    const uniqueName = crypto.randomUUID() + path.extname(file.originalname);
    cb(null, uniqueName);
  },
});

// Configure upload middleware
const upload = multer({
  storage,
  limits: {
    fileSize: 10 * 1024 * 1024, // 10 MB max
  },
  fileFilter: (req, file, cb) => {
    const allowedTypes = ["image/jpeg", "image/png", "image/webp", "image/gif"];
    if (allowedTypes.includes(file.mimetype)) {
      cb(null, true);
    } else {
      cb(new Error(`File type ${file.mimetype} not allowed`));
    }
  },
});

const app = express();

// Single file upload
app.post("/api/upload", upload.single("file"), (req, res) => {
  if (!req.file) {
    return res.status(400).json({ error: "No file uploaded" });
  }
  res.json({
    filename: req.file.filename,
    originalName: req.file.originalname,
    size: req.file.size,
    url: `/uploads/${req.file.filename}`,
  });
});

// Multiple files
app.post("/api/upload-multiple", upload.array("files", 5), (req, res) => {
  res.json({
    files: req.files.map((f) => ({
      filename: f.filename,
      size: f.size,
      url: `/uploads/${f.filename}`,
    })),
  });
});

// Error handling: Multer errors need special treatment
app.use((err, req, res, next) => {
  if (err instanceof multer.MulterError) {
    if (err.code === "LIMIT_FILE_SIZE") {
      return res.status(413).json({ error: "File too large. Max 10MB." });
    }
    return res.status(400).json({ error: err.message });
  }
  if (err.message?.includes("not allowed")) {
    return res.status(415).json({ error: err.message });
  }
  next(err);
});
```
### The Filename Trap

Never use the original filename for storage. Reasons:

- Two users upload `photo.jpg`: the second overwrites the first
- Filenames can contain path traversal sequences: `../../etc/passwd`
- Filenames with special characters break URLs and file systems
- Predictable filenames enable enumeration attacks
### Content Type Verification

The `mimetype` from Multer comes from the `Content-Type` header, which the client controls. An attacker can upload a malicious file with an `image/jpeg` content type. For real security, verify the actual file content:
```javascript
// Note: file-type v17+ is ESM-only; from CommonJS, pin v16 or use dynamic import()
const { fileTypeFromBuffer } = require("file-type");
const fs = require("fs").promises;

app.post("/api/upload-secure", upload.single("file"), async (req, res) => {
  if (!req.file) {
    return res.status(400).json({ error: "No file" });
  }

  // Detect the actual file type from the file's magic bytes
  const buffer = await fs.readFile(req.file.path);
  const type = await fileTypeFromBuffer(buffer);

  const allowedMimes = ["image/jpeg", "image/png", "image/webp", "image/gif"];
  if (!type || !allowedMimes.includes(type.mime)) {
    // Delete the uploaded file
    await fs.unlink(req.file.path);
    return res.status(415).json({
      error: "Invalid file type. Only JPEG, PNG, WebP, and GIF allowed.",
    });
  }

  res.json({ url: `/uploads/${req.file.filename}` });
});
```
## Method 2: Direct Upload to S3 with Presigned URLs

This is the production pattern. The flow:

1. Frontend asks your server for a presigned upload URL
2. Server generates a signed S3 URL (valid for a few minutes)
3. Frontend uploads the file directly to S3
4. Frontend tells your server the upload is complete

```bash
npm install @aws-sdk/client-s3 @aws-sdk/s3-request-presigner
```
### Backend: Generate Presigned URL
```javascript
const { S3Client, PutObjectCommand } = require("@aws-sdk/client-s3");
const { getSignedUrl } = require("@aws-sdk/s3-request-presigner");
const crypto = require("crypto");

const s3 = new S3Client({
  region: process.env.AWS_REGION,
  credentials: {
    accessKeyId: process.env.AWS_ACCESS_KEY_ID,
    secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
  },
});

app.post("/api/upload/presign", async (req, res) => {
  const { filename, contentType } = req.body;

  // Validate content type
  const allowedTypes = ["image/jpeg", "image/png", "image/webp", "application/pdf"];
  if (!allowedTypes.includes(contentType)) {
    return res.status(415).json({ error: "File type not allowed" });
  }

  // Generate a unique key
  const ext = filename.split(".").pop();
  const key = `uploads/${crypto.randomUUID()}.${ext}`;

  const command = new PutObjectCommand({
    Bucket: process.env.S3_BUCKET,
    Key: key,
    ContentType: contentType,
  });

  const signedUrl = await getSignedUrl(s3, command, { expiresIn: 300 }); // 5 min

  res.json({
    uploadUrl: signedUrl,
    fileUrl: `https://${process.env.S3_BUCKET}.s3.amazonaws.com/${key}`,
    key,
  });
});
```
### Frontend: Upload with Progress
```javascript
async function uploadFile(file, onProgress) {
  // Step 1: Get presigned URL
  const response = await fetch("/api/upload/presign", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      filename: file.name,
      contentType: file.type,
    }),
  });
  const { uploadUrl, fileUrl } = await response.json();

  // Step 2: Upload directly to S3
  return new Promise((resolve, reject) => {
    const xhr = new XMLHttpRequest();

    xhr.upload.addEventListener("progress", (e) => {
      if (e.lengthComputable) {
        const percent = Math.round((e.loaded / e.total) * 100);
        onProgress?.(percent);
      }
    });

    xhr.addEventListener("load", () => {
      if (xhr.status >= 200 && xhr.status < 300) {
        resolve(fileUrl);
      } else {
        reject(new Error(`Upload failed: ${xhr.status}`));
      }
    });

    xhr.addEventListener("error", () => reject(new Error("Upload failed")));

    xhr.open("PUT", uploadUrl);
    xhr.setRequestHeader("Content-Type", file.type);
    xhr.send(file);
  });
}
```
Why `XMLHttpRequest` instead of `fetch`? Because the Fetch API exposes no upload progress events, so there is no reliable cross-browser way to track upload percentage with it. XHR's `upload.onprogress` is still the right tool here.
### React Upload Component
```jsx
import { useState } from "react";

function FileUploader() {
  const [progress, setProgress] = useState(0);
  const [uploading, setUploading] = useState(false);
  const [previewUrl, setPreviewUrl] = useState(null);

  const handleFileChange = async (e) => {
    const file = e.target.files[0];
    if (!file) return;

    // Client-side validation
    if (file.size > 10 * 1024 * 1024) {
      alert("File must be under 10MB");
      return;
    }

    // Show preview for images
    if (file.type.startsWith("image/")) {
      setPreviewUrl(URL.createObjectURL(file));
    }

    setUploading(true);
    setProgress(0);
    try {
      const url = await uploadFile(file, setProgress);
      console.log("Uploaded to:", url);
    } catch (err) {
      console.error("Upload failed:", err);
    } finally {
      setUploading(false);
    }
  };

  return (
    <div>
      <input
        type="file"
        accept="image/*,.pdf"
        onChange={handleFileChange}
        disabled={uploading}
      />
      {previewUrl && <img src={previewUrl} alt="Preview" style={{ maxWidth: 200 }} />}
      {uploading && (
        <div className="progress-bar">
          <div style={{ width: `${progress}%` }}>{progress}%</div>
        </div>
      )}
    </div>
  );
}
```
## Image Optimization

Accepting raw uploads and serving them directly is wasteful. A 4000x3000 JPEG straight from a phone camera is often 5-8 MB. You should resize and convert:
```javascript
const sharp = require("sharp");

async function optimizeImage(inputPath, outputPath, options = {}) {
  const { maxWidth = 1920, maxHeight = 1080, quality = 80 } = options;

  await sharp(inputPath)
    .resize(maxWidth, maxHeight, {
      fit: "inside", // Maintain aspect ratio
      withoutEnlargement: true, // Don't upscale small images
    })
    .webp({ quality }) // Convert to WebP
    .toFile(outputPath);
}
```
Or generate multiple sizes for responsive images:
```javascript
async function generateThumbnails(inputPath, baseKey) {
  const sizes = [
    { name: "thumb", width: 150, height: 150 },
    { name: "medium", width: 800, height: 600 },
    { name: "large", width: 1920, height: 1080 },
  ];

  const results = {};
  for (const size of sizes) {
    const outputPath = `uploads/${baseKey}-${size.name}.webp`;
    await sharp(inputPath)
      .resize(size.width, size.height, { fit: "cover" })
      .webp({ quality: 80 })
      .toFile(outputPath);
    results[size.name] = outputPath;
  }
  return results;
}
```
## Security Checklist

| Threat | Mitigation |
|---|---|
| Unrestricted file size | Set `limits.fileSize` in Multer; on S3, use presigned POST policies with a `content-length-range` condition |
| Malicious file types | Verify content with magic bytes, not just the extension |
| Path traversal | Generate random filenames; never use user input in paths |
| Denial of service | Rate limit upload endpoints |
| Serving uploaded files as HTML | Set `Content-Disposition: attachment` and `X-Content-Type-Options: nosniff` headers |
| Missing auth | Require authentication before generating presigned URLs |
| Abandoned uploads | Set S3 lifecycle rules to delete incomplete multipart uploads |
## S3 Bucket Configuration

Your bucket should not be publicly readable by default. Serve files through CloudFront with signed URLs, or attach a bucket policy that allows reads only on a specific prefix:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicRead",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::your-bucket/uploads/*"
    }
  ]
}
```
For sensitive files (user documents, private photos), don't make the bucket public. Generate presigned download URLs with expiration:
```javascript
const { GetObjectCommand } = require("@aws-sdk/client-s3");

async function getDownloadUrl(key) {
  const command = new GetObjectCommand({
    Bucket: process.env.S3_BUCKET,
    Key: key,
  });
  return getSignedUrl(s3, command, { expiresIn: 3600 }); // 1 hour
}
```
## Common Mistakes
- **Storing files on the server filesystem in production.** Your server can restart, scale horizontally, or get replaced, and files on its disk disappear. Always use cloud storage (S3, GCS, R2) for anything persistent.
- **Not setting size limits.** Without limits, a single request can exhaust your server's memory or disk. Set limits at every layer: Multer, Nginx, your reverse proxy.
- **Trusting file extensions.** A `malware.exe` renamed to `profile.jpg` sails right past an extension check. Validate content, not names.
- **Blocking the event loop with synchronous image processing.** Sharp is async by default. If you're using other libraries, make sure processing runs in a worker thread for large files.
- **No cleanup for failed uploads.** If your app crashes between receiving a file and saving the record to your database, you get orphaned files. Run a periodic cleanup job.