File System Operations in Node.js
LinkedIn Hook
"Why did your Node.js server suddenly stop responding to every user when one admin uploaded a 2 GB log file?"
The answer is one line of code: fs.readFileSync('huge.log'). That single sync call froze the event loop for eight full seconds, blocking every concurrent request, every database query, every websocket heartbeat. The server didn't crash. It just stopped breathing.
The fs module is the most innocent-looking part of Node.js — and the most dangerous. It ships four different ways to read a file: synchronous, callback-async, promise-based, and streaming. Each has a specific use case, and choosing the wrong one is the difference between a server that scales to 10,000 users and one that falls over at 50.
Most tutorials only teach fs.readFile. Real Node.js engineers know when to reach for fs.promises, when to stream, when sync is actually fine (hint: startup config), and when watching a file is better than polling it.
In Lesson 3.1, I break down all four styles, the modern fs.promises API, directory recursion, file stats, permissions, and the real-world patterns interviewers love asking about.
Read the full lesson -> [link]
#NodeJS #Backend #FileSystem #JavaScript #InterviewPrep
What You'll Learn
- The four ways to read and write files in Node.js — sync, callback, promise, stream
- When each style is appropriate (and when it will destroy your server)
- The modern fs.promises API and why it's the new default
- How to watch files for changes with fs.watch (and why chokidar exists)
- Recursive directory operations — readdir and mkdir with recursive: true
- Inspecting files with fs.stat — size, mtime, isFile, isDirectory
- Permissions — fs.chmod, fs.access, and the right way to check existence
- Real-world patterns: loading config at startup, rotating log files
The Filing Cabinet Analogy — Four Ways to Retrieve a Folder
Imagine your server is a busy office and the disk is a filing cabinet in another room. Every file request means someone has to walk over, open a drawer, and bring back a folder. You — the office manager — have four employees you can send.
Employee 1: The Freezer (sync). This employee walks to the cabinet and stands at your desk until they return. While they're gone, nobody else in the office can do anything. The phones ring unanswered. Customers wait. The whole office is paused. They're fast for a single trip, but if the file is huge or the cabinet is far away, the office grinds to a halt.
Employee 2: The Note-Taker (callback). This employee takes a sticky note from you saying "when you get the file, do this with it" and walks off. The office keeps running. When they come back, they read the note and execute it. Reliable, but if you ask them to do five errands in sequence, you end up with sticky notes pasted on top of sticky notes — the legendary "callback hell."
Employee 3: The Promise-Keeper (fs.promises). Same as the note-taker, but instead of sticky notes they hand you a numbered claim ticket. You can await the ticket whenever you're ready, chain it with other tickets, and use try/catch for errors. This is the modern default — clean, composable, exception-friendly.
Employee 4: The Conveyor Belt (streams). Instead of carrying the entire folder back at once, this employee sets up a conveyor belt that delivers the file page by page. You can start processing page 1 while page 47 is still being fetched. Memory usage stays tiny no matter how big the file is. This is how you handle 10 GB log files without exploding the heap.
+---------------------------------------------------------------+
| THE FOUR WAYS TO READ A FILE |
+---------------------------------------------------------------+
| |
| SYNC (fs.readFileSync) |
| [server] --BLOCKED--> [disk] --BLOCKED--> [server] |
| Event loop FROZEN. No other requests served. |
| Use only at: startup, CLI tools, build scripts. |
| |
| CALLBACK (fs.readFile) |
| [server] --request--> [disk] |
| | | |
| +-- handles other ---+ |
| | requests | |
| <----- callback -----+ |
| Non-blocking. Old style. Nested = callback hell. |
| |
| PROMISE (fs.promises.readFile) |
| const data = await fs.readFile('x.txt') |
| Non-blocking. Awaitable. Try/catch errors. MODERN DEFAULT. |
| |
| STREAM (fs.createReadStream) |
| [disk] --chunk--> [chunk] --chunk--> [chunk] --> [server] |
| Constant memory. Backpressure. Use for big files & pipes. |
| |
+---------------------------------------------------------------+
Napkin AI Visual Prompt: "Dark gradient (#0a1a0a -> #0d2e16). Four horizontal lanes labeled 'sync', 'callback', 'promise', 'stream'. The sync lane shows a frozen server icon (amber #ffb020 warning). The callback lane shows arrows looping back from disk. The promise lane shows a clean await arrow in Node green (#68a063). The stream lane shows a conveyor belt of small chunks flowing left to right with a 'constant memory' label. White monospace text throughout."
Style 1 — Synchronous (fs.readFileSync, fs.writeFileSync)
The simplest form. The function returns the file contents directly, or throws an error. The catch: it blocks the entire Node.js event loop until the disk operation finishes.
// app/start.js
// Synchronous file operations — the function returns or throws.
const fs = require('node:fs');
try {
// Read a file as a UTF-8 string. Blocks the event loop.
const config = fs.readFileSync('./config.json', 'utf8');
const parsed = JSON.parse(config);
console.log('Loaded config:', parsed.appName);
} catch (err) {
// Errors come back as thrown exceptions, so try/catch works directly.
console.error('Failed to load config:', err.message);
process.exit(1);
}
// Write a string synchronously. Also blocks the event loop.
fs.writeFileSync('./version.txt', 'v1.0.0\n', 'utf8');
When sync is actually fine:
- During application startup, before the server starts accepting requests. Nothing is waiting on the event loop yet, so blocking it costs nothing.
- In CLI tools and build scripts that do one thing and exit. There are no concurrent users to starve.
- For tiny files (under a few KB) where the operation completes in microseconds.
When sync is a disaster:
- Inside a request handler on a running HTTP server.
- Inside a websocket message handler.
- Anywhere the event loop is serving multiple clients concurrently.
Style 2 — Callback Async (fs.readFile, fs.writeFile)
The original Node.js API. The function returns immediately and calls your callback when the operation finishes. The callback follows the error-first convention: the first argument is an error (or null), the second is the result.
// callback-style.js
const fs = require('node:fs');
// Non-blocking read. The event loop keeps running while the disk works.
fs.readFile('./users.json', 'utf8', (err, data) => {
if (err) {
// Error-first callback convention: err is the first parameter.
console.error('Read failed:', err.message);
return;
}
const users = JSON.parse(data);
// Nesting starts here — write a derived file after reading.
fs.writeFile('./user-count.txt', String(users.length), (writeErr) => {
if (writeErr) {
console.error('Write failed:', writeErr.message);
return;
}
console.log('Wrote count:', users.length);
});
});
This style works, but chaining several operations creates the famous callback pyramid. Modern code rarely uses raw callbacks unless interfacing with very old APIs.
Style 3 — Promises (fs.promises) — The Modern Default
Since Node.js 10, the fs module ships a promise-based version under fs.promises (also importable as node:fs/promises). This is what you should reach for in all new code that runs inside the event loop.
// promise-style.js
// Import the promise-based API directly.
const fs = require('node:fs/promises');
async function syncUserCount() {
try {
// await pauses this function but NOT the event loop.
const data = await fs.readFile('./users.json', 'utf8');
const users = JSON.parse(data);
// Sequential awaits read top-to-bottom — no nesting.
await fs.writeFile('./user-count.txt', String(users.length));
console.log('Synced count:', users.length);
} catch (err) {
// A single try/catch handles both read and write failures.
console.error('syncUserCount failed:', err.message);
}
}
syncUserCount();
Why this is the modern default:
- Reads top to bottom like synchronous code, but doesn't block.
- Errors propagate through try/catch — no if (err) return boilerplate on every line.
- Composes cleanly with Promise.all for parallel operations.
- Works seamlessly with async iterators, for await, and modern frameworks.
// Parallel reads — both files load at the same time, not one after the other.
const [config, secrets] = await Promise.all([
fs.readFile('./config.json', 'utf8'),
fs.readFile('./secrets.json', 'utf8'),
]);
Style 4 — Streams (fs.createReadStream, fs.createWriteStream)
The previous three styles all load the entire file into memory before you can use it. Try that with a 10 GB log file and your process will crash. Streams solve this by delivering the file in chunks — typically 64 KB at a time — and letting you process each chunk as it arrives.
// stream-style.js
const fs = require('node:fs');
const { pipeline } = require('node:stream/promises');
const zlib = require('node:zlib');
async function gzipLogFile() {
// Create a readable stream — the file is NOT loaded into memory.
const source = fs.createReadStream('./huge-app.log');
// Create a writable stream for the compressed output.
const destination = fs.createWriteStream('./huge-app.log.gz');
// Pipeline connects streams and handles errors + cleanup automatically.
// Data flows: disk -> source -> gzip -> destination -> disk
await pipeline(source, zlib.createGzip(), destination);
console.log('Compressed huge-app.log -> huge-app.log.gz');
}
gzipLogFile().catch((err) => console.error('Pipeline failed:', err));
Why streams matter:
- Constant memory. A 10 GB file uses the same RAM as a 10 KB file.
- Backpressure. If the destination is slow (network, disk), the source automatically slows down.
- Composability. Pipe through transforms like gzip, encryption, parsing.
- Time to first byte. You can start processing data before the whole file has been read.
Streams are the right answer whenever the file might be larger than available RAM, or when you're piping data between disk, network, and processing layers.
Watching Files for Changes
Sometimes you need to react when a file changes — config reloads, hot-reload dev servers, processing dropped files. Node.js ships fs.watch for this.
// watch-config.js
const fs = require('node:fs');
// Returns a Watcher object that emits 'change' and 'rename' events.
const watcher = fs.watch('./config.json', (eventType, filename) => {
  console.log(`[${eventType}] ${filename} changed at ${new Date().toISOString()}`);
if (eventType === 'change') {
// Reload config in-memory. In production, debounce this — editors
// often fire multiple change events for a single save.
reloadConfig();
}
});
// Always close watchers on shutdown to free OS resources.
process.on('SIGINT', () => {
watcher.close();
process.exit(0);
});
function reloadConfig() {
const data = fs.readFileSync('./config.json', 'utf8');
console.log('Reloaded:', JSON.parse(data));
}
The honest truth about fs.watch: it's inconsistent across platforms. macOS, Linux, and Windows all behave slightly differently. Editors like VS Code save files atomically (write-to-temp + rename), which can fire multiple events or even invalidate the watcher. For anything beyond a toy script, use chokidar — a battle-tested third-party library that normalizes platform quirks, debounces events, and supports glob patterns.
// Production watching — npm install chokidar
const chokidar = require('chokidar');
chokidar
.watch('./src/**/*.js', { ignoreInitial: true })
.on('change', (path) => console.log(`Changed: ${path}`))
.on('add', (path) => console.log(`Added: ${path}`));
Directory Operations — readdir and mkdir (Recursive)
Modern Node.js makes recursive directory work trivial with the recursive: true option.
// directory-ops.js
const fs = require('node:fs/promises');
const path = require('node:path');
async function listAllJsFiles(root) {
// recursive: true walks every subdirectory.
// withFileTypes: true returns Dirent objects so we can check isFile().
const entries = await fs.readdir(root, {
recursive: true,
withFileTypes: true,
});
// Filter to .js files only and rebuild full paths.
return entries
.filter((e) => e.isFile() && e.name.endsWith('.js'))
.map((e) => path.join(e.parentPath, e.name));
}
async function ensureDir(dirPath) {
// recursive: true creates parent directories as needed,
// and does NOT throw if the directory already exists.
await fs.mkdir(dirPath, { recursive: true });
}
(async () => {
await ensureDir('./build/output/logs');
const files = await listAllJsFiles('./src');
console.log(`Found ${files.length} JS files`);
})();
Key options:
- recursive: true on readdir walks the entire tree (Node.js 18.17+).
- recursive: true on mkdir is the modern equivalent of mkdir -p.
- withFileTypes: true returns Dirent objects with isFile(), isDirectory(), and parentPath (Node.js 20.12+; older versions expose the same value as path).
File Stats — fs.stat
fs.stat returns metadata about a file: size, modification time, type, permissions.
// stats.js
const fs = require('node:fs/promises');
async function describeFile(filePath) {
const stats = await fs.stat(filePath);
return {
sizeBytes: stats.size, // size in bytes
isFile: stats.isFile(), // regular file?
isDirectory: stats.isDirectory(), // directory?
isSymlink: stats.isSymbolicLink(), // symlink?
modifiedAt: stats.mtime, // last write
accessedAt: stats.atime, // last read
createdAt: stats.birthtime, // creation time
mode: stats.mode.toString(8), // permission bits in octal
};
}
describeFile('./package.json').then(console.log);
A common pattern: rotate a log file once it exceeds a size threshold (more on that below).
Permissions — chmod and access
// permissions.js
const fs = require('node:fs/promises');
async function makeExecutable(scriptPath) {
// Octal 0o755 = rwxr-xr-x (owner can write+execute, others can read+execute).
await fs.chmod(scriptPath, 0o755);
}
async function canRead(filePath) {
try {
// fs.access throws if the check fails. Use constants for the mode.
await fs.access(filePath, fs.constants.R_OK);
return true;
} catch {
return false;
}
}
Important: do NOT use fs.access to check existence before reading. It creates a TOCTOU (time-of-check-to-time-of-use) race condition — the file could be deleted between the check and the read. Just attempt the read and handle ENOENT in the catch block.
// WRONG — race condition
if (await canRead('./data.json')) {
const data = await fs.readFile('./data.json', 'utf8'); // file might be gone now
}
// RIGHT — try and handle the error
try {
const data = await fs.readFile('./data.json', 'utf8');
} catch (err) {
if (err.code === 'ENOENT') {
// File doesn't exist — handle gracefully
} else {
throw err;
}
}
Real-World Pattern 1 — Loading Config at Startup
Startup is the one place where synchronous file reads are correct. The server isn't accepting traffic yet, so blocking the event loop costs nothing — and using sync code keeps your bootstrap simple and lets you fail fast before any other module initializes.
// config.js
const fs = require('node:fs');
const path = require('node:path');
function loadConfig() {
// Sync is appropriate here — this runs once, before the server starts.
const configPath = path.join(__dirname, 'config.json');
let raw;
try {
raw = fs.readFileSync(configPath, 'utf8');
} catch (err) {
if (err.code === 'ENOENT') {
console.error(`Missing config file: ${configPath}`);
} else {
console.error(`Cannot read config: ${err.message}`);
}
process.exit(1); // Fail fast — never start with broken config.
}
let parsed;
try {
parsed = JSON.parse(raw);
} catch (err) {
console.error(`Invalid JSON in config: ${err.message}`);
process.exit(1);
}
// Validate required fields and freeze so nothing mutates it later.
if (!parsed.port || !parsed.databaseUrl) {
console.error('Config missing required fields: port, databaseUrl');
process.exit(1);
}
return Object.freeze(parsed);
}
// Export the loaded, validated, frozen config.
module.exports = loadConfig();
Real-World Pattern 2 — Rotating a Log File
A long-running server writing to a single log file will eventually fill the disk. The classic solution: when the log exceeds a size threshold, rename it with a timestamp and start fresh.
// rotating-logger.js
const fs = require('node:fs/promises');
const path = require('node:path');
const LOG_PATH = './app.log';
const MAX_BYTES = 5 * 1024 * 1024; // 5 MB
async function appendLog(line) {
// Check current size. If the file doesn't exist yet, treat size as 0.
let size = 0;
try {
const stats = await fs.stat(LOG_PATH);
size = stats.size;
} catch (err) {
if (err.code !== 'ENOENT') throw err;
}
// Rotate if we'd exceed the threshold after this write.
if (size + Buffer.byteLength(line) > MAX_BYTES) {
const stamp = new Date().toISOString().replace(/[:.]/g, '-');
const rotated = path.join(
path.dirname(LOG_PATH),
`app-${stamp}.log`
);
// Atomic rename — readers of the old path see the rotated file instantly.
await fs.rename(LOG_PATH, rotated);
}
// Append the line. 'a' flag = append, creates the file if missing.
await fs.appendFile(LOG_PATH, line + '\n');
}
module.exports = { appendLog };
For higher-volume logging, use a streaming logger like pino with its rotating transport — but the principle above is what's happening under the hood.
Common Mistakes
1. Using sync file operations inside a request handler.
The single most common Node.js performance bug. fs.readFileSync inside an HTTP handler blocks the event loop for every concurrent request, not just the one that called it. A 50 ms disk read becomes 50 ms of frozen latency for every user on the server. Use fs.promises everywhere except startup.
2. Loading huge files entirely into memory with readFile.
fs.readFile returns the whole file as a single Buffer or string. For a 2 GB file, that's 2 GB of RAM — and Node.js will throw ERR_FS_FILE_TOO_LARGE for files over ~2 GB anyway. Use fs.createReadStream for anything that might be large, and process it chunk by chunk.
3. Checking existence with fs.access before reading.
This creates a TOCTOU race: the file can be deleted between the check and the read. Always attempt the operation and catch ENOENT (or whichever error code you care about) in the error handler. The exception IS the existence check.
4. Forgetting to close watchers and streams.
fs.watch holds an OS-level file descriptor. Streams hold buffers and underlying handles. If you create them dynamically and never call .close() or let them end, you leak resources. Use pipeline() for streams (it cleans up on error) and close watchers in your shutdown handler.
5. Ignoring error codes.
err.message is for humans. err.code is for programs. Always branch on codes like ENOENT (not found), EACCES (permission denied), EISDIR (path is a directory), EEXIST (already exists). Logging just the message and giving up loses critical information about what to do next.
Interview Questions
1. "What are the four ways to read a file in Node.js, and when would you use each?"
The four styles are synchronous (fs.readFileSync), callback-based async (fs.readFile), promise-based (fs.promises.readFile), and streams (fs.createReadStream). Sync is acceptable only at startup, in CLI tools, and in build scripts — anywhere there's no event loop serving concurrent requests. Callback async is the original style and still works, but it leads to nested "callback hell" for multi-step operations and is rarely chosen for new code. Promise-based via fs.promises is the modern default for any async file work inside the event loop — it composes with async/await, supports try/catch, and works with Promise.all for parallelism. Streams are the right choice when files might be larger than RAM or when you need to pipe data through transforms like compression — they keep memory usage constant regardless of file size.
2. "Why is it dangerous to use fs.readFileSync inside an HTTP request handler?"
Node.js runs JavaScript on a single thread driven by the event loop. fs.readFileSync blocks that thread until the disk operation completes, which means every concurrent request waits — not just the one that called it. A 100 ms sync read inside one handler adds 100 ms of latency to every other request being processed at the same time. With 200 requests per second, the server effectively serializes them and falls over. The async equivalents (fs.readFile, fs.promises.readFile, streams) hand the work to libuv's thread pool and let the event loop continue serving other requests while the disk works in the background.
3. "What is the difference between fs.readFile and fs.createReadStream? When would you choose the stream?"
fs.readFile loads the entire file into memory as a single Buffer (or string with encoding) before calling your callback or resolving the promise. fs.createReadStream opens the file and emits data in chunks — typically 64 KB at a time — through a Readable stream. Choose the stream when the file might be large (more than a few MB), when you're piping the data somewhere else like an HTTP response or another file, when you want to start processing before the full file is loaded, or when you need backpressure handling so a slow consumer doesn't overwhelm a fast producer. The headline win is constant memory: a 10 GB file streams with the same RAM footprint as a 10 KB file.
4. "What's the right way to check if a file exists in Node.js?"
Don't pre-check — just attempt the operation and handle the error. Calling fs.access or fs.stat first creates a TOCTOU (time-of-check-to-time-of-use) race condition where the file can be deleted, created, or have its permissions changed between the check and the actual operation. Instead, call fs.readFile (or whatever you intend to do) inside a try/catch and check err.code === 'ENOENT' in the catch. This is atomic, race-free, and faster because it makes one syscall instead of two. The only legitimate use of fs.access is when you genuinely need to know permissions in advance and there's no operation you intend to perform.
5. "How does fs.watch work, and why do most production projects use chokidar instead?"
fs.watch uses the operating system's native file change notification API — inotify on Linux, FSEvents on macOS, ReadDirectoryChangesW on Windows. It returns a Watcher object that emits change and rename events. The problem is that these native APIs behave differently on each platform: macOS may emit one event while Linux emits two, Windows may not detect renames at all in some configurations, and atomic-save editors like VS Code (which write to a temp file and rename) often invalidate the watcher entirely. chokidar wraps fs.watch (and falls back to polling where needed), normalizes events across platforms, debounces editor save bursts, supports glob patterns, and handles symlinks and recursive watching reliably. Almost every production tool that watches files — webpack, vite, nodemon — uses chokidar under the hood for these reasons.
Quick Reference — fs Cheat Sheet
+---------------------------------------------------------------+
| FS API CHEAT SHEET |
+---------------------------------------------------------------+
| |
| SYNC (startup only): |
| const data = fs.readFileSync('x.json', 'utf8') |
| fs.writeFileSync('out.txt', data) |
| |
| CALLBACK (legacy): |
| fs.readFile('x.json', 'utf8', (err, data) => {...}) |
| |
| PROMISE (modern default): |
| const fs = require('node:fs/promises') |
| const data = await fs.readFile('x.json', 'utf8') |
| await fs.writeFile('out.txt', data) |
| |
| STREAM (big files / pipes): |
| const r = fs.createReadStream('big.log') |
| const w = fs.createWriteStream('big.log.gz') |
| await pipeline(r, zlib.createGzip(), w) |
| |
| DIRECTORIES: |
| await fs.mkdir('a/b/c', { recursive: true }) |
| await fs.readdir('.', { recursive: true, |
| withFileTypes: true }) |
| |
| STATS & PERMISSIONS: |
| const s = await fs.stat('x.txt') |
| s.size s.mtime s.isFile() s.isDirectory() |
| await fs.chmod('script.sh', 0o755) |
| |
| WATCHING: |
| fs.watch('x.json', (event, name) => {...}) |
| // Production: use chokidar instead |
| |
+---------------------------------------------------------------+
+---------------------------------------------------------------+
| ERROR CODES TO KNOW |
+---------------------------------------------------------------+
| |
| ENOENT - File or directory does not exist |
| EACCES - Permission denied |
| EEXIST - File already exists (mkdir without recursive) |
| EISDIR - Expected a file, got a directory |
| ENOTDIR - Expected a directory, got a file |
| EMFILE - Too many open files (close your handles!) |
| |
+---------------------------------------------------------------+
| Style | Blocks Event Loop? | Memory | Best For | Avoid In |
|---|---|---|---|---|
| readFileSync | Yes | Whole file in RAM | Startup, CLI tools, build scripts | HTTP handlers, request paths |
| readFile (callback) | No | Whole file in RAM | Legacy code, simple one-shot reads | New code (use promises) |
| fs.promises.readFile | No | Whole file in RAM | Modern async code, small/medium files | Files larger than ~100 MB |
| createReadStream | No | Constant (chunk size) | Big files, pipes, transforms | Tiny files where overhead > benefit |
Prev: Lesson 2.3 -- Module Resolution Next: Lesson 3.2 -- path and os Modules
This is Lesson 3.1 of the Node.js Interview Prep Course -- 10 chapters, 42 lessons.