MongoMem: Memory usage by collection in MongoDB
Here at Wish, we’re big fans of MongoDB. It powers our site for 8 million users and has been a pretty good experience for us. To help keep everything running smoothly, we’ve built a handful of tools to help automate things and get more insights into what’s going on.
Today, we’re releasing the first of these tools, MongoMem. MongoMem solves the age-old problem of figuring out how much memory each collection is using. In MongoDB, keeping your working set in memory is pretty important for most apps. The problem is, there’s not really a way to get visibility into the working set or what’s in memory beyond looking at resident set size or page faults rate.
As engineers, we usually have a rough, intuitive sense of how the memory distribution breaks down by collection. But, without a good way to validate those assumptions, we found it was easy to look in the wrong places for problems. In our early days, we kept using a lot more memory than we thought we should be, but we were running blind when we tried to decide where the low-hanging fruit was to optimize. After plenty of frustrating optimizations that didn’t make much difference, we decided that we really needed better information, and MongoMem was born.
You can find MongoMem on GitHub.
- Download the code from GitHub
- pip install -r pip-requirements (it’s just pymongo; any version of either is probably fine)
- sudo python setup.py install (installs into /usr/local/bin, so you’ll probably need the sudo)
If you run into any trouble here, leave a comment or ping me at email@example.com. I’ve only tried installing this on a couple of machines here, so there could be problems I missed.
MongoMem is pretty simple to use. You have to run it on the same server as your mongod, since it needs to be able to read the mongo data files directly (so you may need to run it as root or as your mongodb user, depending on how your permissions are set up). It’s safe to run against a live production site (it just makes a few cheap syscalls and doesn’t actually touch data).
With that out of the way, usage is:
mongomem --dbpath DBPATH [--num NUM] [--directoryperdb] [--connection CONN]
- DBPATH: path to your mongo data files (/var/lib/mongodb/ is mongo’s default location for this).
- NUM: show stats for the top N collections (by current memory usage).
- Add --directoryperdb if you’re using that option to start mongod.
- CONN: pymongo connection string (“localhost” is the default, which should pretty much always work unless you’re running on a port other than 27017).
It’ll take up to a couple of minutes to run, depending on your data size, and then it’ll print a report of the top collections. Don’t worry if you see a few warnings about lengths not being multiples of the page size; unless there are thousands of those warnings, they won’t really impact your results.
For each collection, it prints:
- Number of MB in memory
- Number of MB total
- Percentage of the collection that’s in memory
How it Works
In theory, the problem isn’t that hard. MongoDB uses mmapped files, so to figure out what data is in memory (on Linux, anyway), a mincore call on each of the data files will tell you which pages are in the page cache. So, if you know which collection owns each page, you can easily count the number of in-cache pages per collection. The only trick is figuring out which regions of the file map to which collections.
You can figure that out by parsing the namespace files or by traversing the data structures inside MongoDB, but both of those options are annoying if you want to stay in Python and not patch mongo itself. The validate command will give you the extent map, but that’s horribly impactful (and will touch the whole collection anyway), so it wasn’t an option. Thanks to a tip from Eliot over at 10gen, though, it turns out the collStats command has an undocumented option that gives us exactly what we need: if you add the verbose: true option to that command, it’ll return the full extent map for the collection. Armed with that, you can crank through and get all the data.
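Putting the two pieces together might look something like the sketch below. The shape of the extent map that collStats returns with verbose: true isn’t documented, so the "offset"/"len" keys here are an assumed, illustrative format, not the real server output; the counting logic is the part that matters.

```python
import mmap

def collection_extents(db, coll_name):
    # verbose: true is the undocumented collStats option described above.
    # NOTE: the exact shape of the returned extent map varies by server
    # version; "extents" with byte "offset"/"len" fields is an assumption
    # made for this sketch.
    stats = db.command("collStats", coll_name, verbose=True)
    return stats.get("extents", [])

def pages_in_cache(extents, resident):
    """Count how many of a collection's pages are in the page cache, given
    its extent list (hypothetical dicts with byte "offset" and "len" within
    one data file) and that file's per-page residency vector (e.g. from a
    mincore-style check)."""
    in_cache = 0
    for ext in extents:
        first = ext["offset"] // mmap.PAGESIZE
        last = (ext["offset"] + ext["len"] - 1) // mmap.PAGESIZE
        in_cache += sum(1 for p in range(first, last + 1) if resident[p])
    return in_cache
```

Multiply the in-cache page count by the page size and you have the per-collection “MB in memory” number the report prints; the extent lengths alone give the total size.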
One thing that I’d love to do with this but I haven’t spent enough time experimenting with is to pull these numbers continuously so I can plot them. I think it’d be really cool to see how these numbers change over time and also, within individual collections, how the memory usage changes. If you could measure the difference between consecutive snapshots with a sufficiently small period (possibly non-trivial since it takes around a minute or two to run on a large DB), you could get a plot of page faults by collection. Could be interesting to see where your faults are coming from (and how they change over time / in response to various events).
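As a rough sketch of that diffing idea: comparing two residency snapshots gives a lower bound on faults, since pages that came in and were evicted again between samples are missed entirely.

```python
def newly_resident(prev, curr):
    """Pages that are in cache now but weren't in the previous snapshot:
    a rough lower bound on the pages faulted in between the two samples.
    Both arguments are per-page residency vectors (lists of bools) for
    the same file region."""
    return sum(1 for was, now in zip(prev, curr) if now and not was)
```

Run this per collection on consecutive snapshots and the series approximates a per-collection page-fault rate, with the caveat about sampling period noted above.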
Another thing that I think would be cool is to break the data down further so we can see per-collection and per-index numbers (right now a collection in the tool counts as data + index). Sadly, there’s no command to get the extent map broken down by data and index, but if 10gen can add this feature, the tool could also give information about which indexes are memory hogs.
Just want to give a shout-out to Eliot over at 10gen for pointing me to verbose: true, which saved me a lot of trouble getting the data I needed for this, and to David Stainton for the Python mincore wrapper I needed to pull everything together.