# Optimising ClamAV scan performance in Node.js
A fresh ClamAV scan on a modern machine takes 100–500 ms for a typical document. For high-upload-volume applications, that latency adds up quickly. Understanding where the time goes points to the right optimisation.
| Phase | CLI mode (clamscan) | TCP mode (clamd) |
|---|---|---|
| Process / connection startup | ~150–300 ms — loads the full database into memory on every call | ~1–5 ms — clamd is already running with DB in memory |
| File I/O | Reads file from disk | File streamed over TCP — adds minimal overhead for local clamd |
| Signature matching | Same engine, same speed | Same engine, same speed |
The single biggest win for production workloads is switching from CLI mode to TCP mode (clamd).
## CLI mode vs TCP mode
pompelmi defaults to CLI mode: it spawns a new `clamscan` process for every `scan()` call. Each new process loads the ClamAV virus database (~300 MB) from disk into memory — this alone takes 150–300 ms on most hardware.

In TCP mode, a clamd daemon runs persistently with the database already loaded. Each `scan()` call opens a socket, streams the file, and reads the result. Connection overhead is a few milliseconds — database load time is zero.
To use TCP mode, pass host and port to `scan()`:

```js
const { scan, Verdict } = require('pompelmi');

// TCP mode — clamd must be running and listening on this host:port
const verdict = await scan('/tmp/upload.pdf', {
  host: process.env.CLAMD_HOST || '127.0.0.1',
  port: Number(process.env.CLAMD_PORT) || 3310,
  timeout: 15_000,
});
```
For production, always use TCP mode. CLI mode is convenient for development and scripting but not suitable for upload-heavy services.
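One way to enforce that rule without hard-coding it is to build the scan options from the environment: TCP mode whenever `CLAMD_HOST` is set, CLI mode only outside production. The helper below is a sketch of ours, not part of pompelmi's API; only the `host`/`port`/`timeout` option names come from the examples above.

```js
// scan-options.js — derive pompelmi scan() options from the environment.
// TCP mode when CLAMD_HOST is set; CLI mode only outside production.
function scanOptions(env = process.env) {
  if (env.CLAMD_HOST) {
    return {
      host: env.CLAMD_HOST,
      port: Number(env.CLAMD_PORT) || 3310,
      timeout: Number(env.CLAMD_TIMEOUT) || 15_000,
    };
  }
  if (env.NODE_ENV === 'production') {
    // Fail fast rather than silently falling back to slow CLI scans.
    throw new Error('CLAMD_HOST must be set in production (CLI mode is too slow)');
  }
  return {}; // empty options → pompelmi's CLI mode
}

module.exports = { scanOptions };
```

Call it as `scan(filePath, scanOptions())` so a missing `CLAMD_HOST` surfaces at startup in production instead of showing up later as slow requests.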
## Running clamd locally

The quickest way to run clamd locally is with Docker:

```bash
docker run -d \
  --name clamav \
  -p 3310:3310 \
  clamav/clamav:stable
```

Wait for clamd to be ready (it runs freshclam on first boot — takes 1–3 minutes), then verify it is accepting connections:

```bash
nc -zv 127.0.0.1 3310
# Connection to 127.0.0.1 3310 port [tcp/*] succeeded!
```

On Linux without Docker, install the clamav-daemon package:

```bash
sudo apt-get install -y clamav-daemon
sudo systemctl enable --now clamav-daemon
```
## Concurrency

Each `scan()` call is independent and can run concurrently. Clamd handles multiple simultaneous connections — the default `MaxThreads` in clamd.conf is 10.

For bulk uploads or batch processing, scan concurrently with `Promise.all`. For very high volumes, use p-limit to cap concurrency at your clamd thread count:

```bash
npm install p-limit
```

```js
const pLimit = require('p-limit');
const { scan, Verdict } = require('pompelmi');

const CONCURRENCY = 8; // stay under MaxThreads in clamd.conf
const limit = pLimit(CONCURRENCY);

async function scanBatch(filePaths) {
  return Promise.all(
    filePaths.map((p) =>
      limit(() => scan(p, { host: '127.0.0.1', port: 3310 }))
    )
  );
}
```
Check the `MaxThreads` setting in /etc/clamav/clamd.conf (default: 10) and set your concurrency limit to 80–90% of that value. This leaves headroom for clamd's internal housekeeping threads.
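Rather than hard-coding the limit, you can read `MaxThreads` from clamd.conf and derive it. This is a sketch of ours (the function name, 80% headroom factor, and fallback of 10 are assumptions matching the guidance above):

```js
// concurrency.js — derive a p-limit concurrency cap from clamd.conf.
const { readFileSync } = require('fs');

function concurrencyFromConf(path = '/etc/clamav/clamd.conf', headroom = 0.8) {
  let maxThreads = 10; // clamd's documented default
  try {
    const text = readFileSync(path, 'utf8');
    const m = text.match(/^MaxThreads\s+(\d+)/m);
    if (m) maxThreads = Number(m[1]);
  } catch {
    // Config not readable (e.g. clamd runs in Docker) — keep the default.
  }
  return Math.max(1, Math.floor(maxThreads * headroom));
}

module.exports = { concurrencyFromConf };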
## Timeout tuning
pompelmi's TCP mode accepts a timeout option (milliseconds of
socket inactivity before the call rejects). The default is 15,000 ms.
Set the timeout based on your largest expected file size. A rough rule:
| Max file size | Recommended timeout |
|---|---|
| 5 MB | 5,000 ms |
| 20 MB | 10,000 ms |
| 50 MB | 20,000 ms |
| 100 MB+ | 30,000 ms or more |
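If your maximum upload size is configurable, the table above can be encoded as a small helper instead of a hard-coded number. The bands and function name are our heuristics, not pompelmi defaults:

```js
// Map the largest expected file size to a scan timeout, per the table above.
const MB = 1024 * 1024;

function timeoutForSize(maxBytes) {
  if (maxBytes <= 5 * MB) return 5_000;
  if (maxBytes <= 20 * MB) return 10_000;
  if (maxBytes <= 50 * MB) return 20_000;
  return 30_000; // 100 MB+ — consider raising further for huge archives
}

module.exports = { timeoutForSize };
```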
A tight timeout also acts as a circuit breaker: if clamd becomes unresponsive due to overload or a hanging scan, requests will fail quickly rather than blocking indefinitely.
```js
const verdict = await scan(filePath, {
  host: '127.0.0.1',
  port: 3310,
  timeout: 10_000, // 10 s — appropriate for files up to ~20 MB
});
```
## Enforce file size limits before scanning

Every megabyte you scan costs time. Rejecting oversized files before calling pompelmi keeps latency predictable and avoids wasting scan time on large archives. Use Multer's `limits.fileSize` to enforce the limit before the file even reaches your handler:
```js
const multer = require('multer');
const os = require('os');

const upload = multer({
  dest: os.tmpdir(),
  limits: { fileSize: 20 * 1024 * 1024 }, // reject above 20 MB immediately
});
```
When Multer rejects an oversized file, it passes a `MulterError` with `code: 'LIMIT_FILE_SIZE'` to your error middleware. Handle it there:
```js
app.use((err, req, res, next) => {
  if (err.code === 'LIMIT_FILE_SIZE') {
    return res.status(413).json({ error: 'File too large.' });
  }
  next(err);
});
```
## Quick benchmark

Run this script to compare CLI vs TCP scan times on your own hardware:

```js
// bench.js
const { scan } = require('pompelmi');
const { writeFileSync } = require('fs');
const { tmpdir } = require('os');
const { join } = require('path');

const FILE = join(tmpdir(), 'bench-1mb.bin');

// Create a 1 MB test file of zeroes
writeFileSync(FILE, Buffer.alloc(1024 * 1024));

async function bench(label, options, runs = 10) {
  const times = [];
  for (let i = 0; i < runs; i++) {
    const t0 = Date.now();
    await scan(FILE, options);
    times.push(Date.now() - t0);
  }
  const avg = Math.round(times.reduce((a, b) => a + b, 0) / runs);
  console.log(`${label}: avg ${avg} ms over ${runs} runs`);
}

(async () => {
  console.log('Warming up clamd…');
  await scan(FILE, { host: '127.0.0.1', port: 3310 });

  await bench('CLI mode', {});
  await bench('TCP mode', { host: '127.0.0.1', port: 3310 });
})();
```
Expected results on a typical development machine:
| Mode | Average (1 MB file) |
|---|---|
| CLI (clamscan) | 250–400 ms |
| TCP (clamd) | 20–60 ms |
## Next steps
- Setting up clamd in Docker Compose or Kubernetes? See Docker Compose or Kubernetes.
- Scanning multiple files concurrently? See Scanning multiple file uploads in a single request.
- Need async scanning to avoid blocking HTTP responses? See Background virus scanning with BullMQ.