Friday, August 7, 2015

MongoDB and AWS Lambda

I haven't posted a blog in quite a while since I've been busy at work (new job) and outside of work.
I've been working on an app with my buddy and we decided to use PhoneGap to write it (well, I'm writing the backend and the JS scripts while he takes care of the GUI).

Anyway, I've also decided to use AWS for the whole backend. I still have to keep a simple EC2 instance to run some jobs on a regular basis, but otherwise everything is handled by AWS services.

I am taking advantage of the Lambda service because I can write code there instead of inside the app, which means that whenever something needs to be changed, the app doesn't need to be updated (or at least not as often).

Initially I was using Lambda to query my DynamoDB instance.
DynamoDB is quite expensive, though, so if it gets too pricey I will switch to MongoDB (I could have picked Cassandra, but I like Mongo's name better...).

I wasn't sure how to access Mongo from Lambda, but I figured it out.

First, I created a Bitnami instance with MongoDB. That's pretty cool because it's eligible for the AWS Free Tier, and MongoDB comes pre-installed, so you barely have to do diddly squat.

You need to configure Mongo so that it accepts traffic from hosts other than localhost.

Edit the config file:

sudo vi /opt/bitnami/mongodb/mongodb.conf

Comment out the bind_ip line:

#bind_ip =

You also need to allow the port with the local firewall:

sudo ufw allow 27017

Restart Mongo:

sudo /opt/bitnami/ctlscript.sh restart mongodb
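Once it's back up, you can sanity-check from another machine that the port answers. The hostname below is a placeholder; use your instance's public DNS name:

```shell
# Placeholder hostname -- substitute your instance's public DNS name.
mongo --host ec2-XX-XX-XX-XX.compute.amazonaws.com --port 27017 --eval "db.serverStatus().ok"
```

If this prints 1, remote connections are working.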

You're almost good to go. The last thing is to allow the port in the AWS console via the security group.
Find out which security group your EC2 instance is using and add port 27017 in the Inbound section.

Now you can query Mongo from anywhere. That's probably not the best security posture, but you can eventually limit access to a specific IP.
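If you'd rather lock it down from the command line, a rule like the following restricts the port to a single address. The group ID and source IP here are placeholders, not real values:

```shell
# Hypothetical group ID and source IP -- substitute your own.
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp \
    --port 27017 \
    --cidr 203.0.113.10/32
```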

Before doing anything else, you need to install the MongoDB Node.js driver on your development machine:

npm install mongodb

You will have a new directory named "node_modules"; your project folder should contain it alongside your index.js and the .zip you'll create.

You can write your NodeJS script and create a zip file. The script has to be named index.js.
Here's a small sample:

var mongodb = require('mongodb');

console.log('Loading function');

exports.handler = function(event, context) {
    console.log("Connecting to Mongo");

    // Fill in your instance's address, e.g. mongodb://<host>:27017/<database>
    mongodb.MongoClient.connect('mongodb://', function(err, db) {
        if (err) return context.done(err);
        console.log("Connected to mongo");
        var col = db.collection('Users');

        col.find({ userID: "415173559090" }).toArray(function(err, docs) {
            if (err) return context.done(err);
            docs.forEach(function(doc) {
                console.log(doc);
            });
            db.close();
            context.done(null, "finished");
        });
    });
};

Upload to Lambda, test, happiness!

Your Lambda function should have at least 512 MB of memory, and 1024 MB is better. The more memory you assign, the more CPU Lambda allocates, so the faster it will connect to your Mongo instance.

That little sample takes 1600 ms with 1024 MB and 3800 ms with 512 MB.


  1. I am working on something like this currently as well. One question, though: 1600 ms seems really slow to me. If you have a 1.6 second delay just finding a single document, how does your application work?

    Have you considered moving to something like RESTHeart to expose MongoDB's interface via HTTP?

  2. Good point; the delay shrinks based on the amount of memory that you assign to the Lambda function.

    For instance, with 256 MB the function takes some 2 s just to connect to Mongo, but with 1 GB it takes four times less. Obviously, your cost goes up as you increase the memory.

    My post is really about mixing and matching both technologies for those who are interested.

    Since then, I have created a simple EC2 instance with MongoDB and NodeJS on it, installed Restify and the MongoDB driver for NodeJS, and now I can do the same thing that I did with Lambda but much, much faster (it takes less than 100ms to query a few documents via a REST call).
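    For the curious, that Restify setup looks roughly like this. The connection string, database, and route are my own placeholders, not necessarily what you'd use:

    ```javascript
    var restify = require('restify');
    var mongodb = require('mongodb');

    // Placeholder connection string -- point it at your own instance.
    mongodb.MongoClient.connect('mongodb://localhost:27017/mydb', function(err, db) {
        if (err) throw err;

        var server = restify.createServer();

        // GET /users/:id returns the matching documents from the Users collection.
        server.get('/users/:id', function(req, res, next) {
            db.collection('Users').find({ userID: req.params.id }).toArray(function(err, docs) {
                if (err) return next(err);
                res.send(docs);
                return next();
            });
        });

        server.listen(8080, function() {
            console.log('REST API listening on port 8080');
        });
    });
    ```

    Because the connection is opened once at startup and reused across requests, you skip the per-invocation connect cost that makes the Lambda version slow.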

  3. Okay, thanks for your reply. I came to the same conclusion. Maybe Lambda works if you're dealing with DynamoDB, but like this it's not useful.

    We switched to the same architecture, just working with an EC2 instance.