Scanning files before uploading to MinIO (self-hosted S3)

MinIO is a high-performance, self-hosted object storage server that is API-compatible with Amazon S3. It is the standard choice for on-premises deployments, air-gapped environments, development setups, and any project that needs S3-compatible storage without cloud dependencies.

Because MinIO speaks the S3 API, you can use the official AWS SDK v3 to talk to it by pointing the client at your MinIO endpoint instead of AWS. The pompelmi scan-then-upload pattern is identical to the one in the S3 guide.

New to pompelmi? Read Getting started with antivirus scanning in Node.js first, then return here for the MinIO-specific setup.

Run MinIO locally for development

The fastest way to get a local MinIO instance running is with Docker:

docker run -d \
  -p 9000:9000 \
  -p 9001:9001 \
  --name minio \
  -e MINIO_ROOT_USER=minioadmin \
  -e MINIO_ROOT_PASSWORD=minioadmin \
  quay.io/minio/minio server /data --console-address ":9001"

Open http://localhost:9001 to access the MinIO console. Create a bucket named uploads and generate an API access key/secret pair under Access Keys.
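If you prefer the command line, the same setup can be done with the MinIO client (mc). The alias name `local` below is arbitrary, and the credentials match the docker run command above:

```shell
# Register the local MinIO instance under the alias "local"
mc alias set local http://localhost:9000 minioadmin minioadmin

# Create the uploads bucket
mc mb local/uploads
```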

Install

npm install pompelmi @aws-sdk/client-s3 multer express

Configure the S3 client for MinIO

Point the AWS SDK v3 at your MinIO endpoint. The key difference from S3 is forcePathStyle: true — MinIO uses path-style URLs (http://minio:9000/bucket/key), while AWS uses virtual-hosted style (https://bucket.s3.amazonaws.com/key):

const { S3Client } = require('@aws-sdk/client-s3');

const minioClient = new S3Client({
  region:          'us-east-1',   // MinIO ignores this, but the SDK requires a value
  endpoint:        process.env.MINIO_ENDPOINT || 'http://localhost:9000',
  forcePathStyle:  true,          // required for MinIO
  credentials: {
    accessKeyId:     process.env.MINIO_ACCESS_KEY,
    secretAccessKey: process.env.MINIO_SECRET_KEY,
  },
});
For production MinIO deployments behind a reverse proxy with TLS, set endpoint to your HTTPS URL (e.g. https://minio.internal) and keep forcePathStyle: true.
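The difference between the two URL styles is easy to see with two small illustrative helpers. These are not SDK functions, just a sketch of the URL shapes involved:

```javascript
// Path-style, as used by MinIO: the bucket is part of the URL path.
function pathStyleUrl(endpoint, bucket, key) {
  // e.g. http://localhost:9000/uploads/a.png
  return `${endpoint.replace(/\/$/, '')}/${bucket}/${key}`;
}

// Virtual-hosted style, as used by AWS: the bucket is part of the hostname.
function virtualHostedUrl(bucket, region, key) {
  // e.g. https://uploads.s3.us-east-1.amazonaws.com/a.png
  return `https://${bucket}.s3.${region}.amazonaws.com/${key}`;
}
```

With forcePathStyle: true the SDK always builds URLs of the first form, which is what a single-host MinIO deployment expects.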

Complete Express example

const express = require('express');
const multer  = require('multer');
const { scan, Verdict } = require('pompelmi');
const { S3Client, PutObjectCommand } = require('@aws-sdk/client-s3');
const crypto = require('crypto');
const path   = require('path');
const fs     = require('fs');
const os     = require('os');

const app    = express();
const upload = multer({
  dest:   os.tmpdir(),
  limits: { fileSize: 50 * 1024 * 1024 },
});

const minioClient = new S3Client({
  region:         'us-east-1',
  endpoint:       process.env.MINIO_ENDPOINT || 'http://localhost:9000',
  forcePathStyle: true,
  credentials: {
    accessKeyId:     process.env.MINIO_ACCESS_KEY || 'minioadmin', // dev-only default
    secretAccessKey: process.env.MINIO_SECRET_KEY || 'minioadmin', // set real credentials in production
  },
});

const BUCKET = process.env.MINIO_BUCKET || 'uploads';

app.post('/upload', upload.single('file'), async (req, res) => {
  if (!req.file) {
    return res.status(400).json({ error: 'No file provided.' });
  }

  const tmpPath   = req.file.path;
  let tmpDeleted  = false;

  try {
    // Step 1 — scan locally before touching MinIO
    const verdict = await scan(tmpPath);

    if (verdict === Verdict.Malicious) {
      return res.status(400).json({ error: 'Malware detected. Upload rejected.' });
    }
    if (verdict === Verdict.ScanError) {
      return res.status(422).json({ error: 'Scan incomplete. Upload rejected.' });
    }

    // Step 2 — file is clean, upload to MinIO
    const ext    = path.extname(req.file.originalname).toLowerCase();
    const key    = 'uploads/' + crypto.randomBytes(16).toString('hex') + ext;

    await minioClient.send(new PutObjectCommand({
      Bucket:      BUCKET,
      Key:         key,
      Body:        fs.createReadStream(tmpPath),
      ContentType: req.file.mimetype,
      Metadata: {
        'scan-status': 'clean',
        'scanned-by':  'pompelmi',
        'scanned-at':  new Date().toISOString(),
      },
    }));

    fs.unlinkSync(tmpPath);
    tmpDeleted = true;

    return res.json({
      status: 'ok',
      key,
      // Path-style URL; only fetchable directly if the bucket allows public reads
      url: `${process.env.MINIO_ENDPOINT || 'http://localhost:9000'}/${BUCKET}/${key}`,
    });

  } catch (err) {
    // Log the real error server-side; don't leak internals to the client
    console.error('Upload failed:', err);
    return res.status(500).json({ error: 'Upload failed.' });

  } finally {
    if (!tmpDeleted && fs.existsSync(tmpPath)) {
      fs.unlinkSync(tmpPath);
    }
  }
});

app.listen(3000, () => console.log('Listening on :3000'));

MinIO bucket policy

By default MinIO buckets are private. If you want files to be publicly readable (e.g. a public CDN), apply a read-only policy using the MinIO client or console. For private uploads, keep the default and generate pre-signed URLs for download:
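For the public-read case, the mc client can apply the policy in one command. This assumes an mc alias named `local` pointing at your MinIO endpoint; on older mc releases the equivalent command was mc policy set download:

```shell
# Allow anonymous read (download) access on the uploads bucket
mc anonymous set download local/uploads
```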

const { GetObjectCommand } = require('@aws-sdk/client-s3');
const { getSignedUrl }    = require('@aws-sdk/s3-request-presigner');

async function getDownloadUrl(key, expiresInSeconds = 3600) {
  return getSignedUrl(
    minioClient,
    new GetObjectCommand({ Bucket: BUCKET, Key: key }),
    { expiresIn: expiresInSeconds }
  );
}
@aws-sdk/s3-request-presigner works with MinIO without any changes — install it alongside @aws-sdk/client-s3.

Next steps