If you are running your services on low-end virtual hosting, every byte of memory you can save matters. Memory is often the limiting factor for how many applications you can run on a VPS: CPUs are shared between guests on the same physical host, but memory is not.
- Low-end VPS come with 512 MB memory or less
- A front-end server (Apache / Nginx / Varnish) takes > 100 MB, plus at least 20 MB for each child process
- Memcached takes its toll
- MySQL takes 200 – 400 MB
- Each Python / PHP process takes at least 15 MB, and you need parallel processes to serve parallel HTTP requests (FCGI, pre-fork, others…)
- Operating system processes need some memory (SSH, cron, sendmail)
As you can see it gets very crowded in 512 MB.
It’s especially troublesome because memory is allocated lazily and usage builds up slowly. At some point caches are no longer caches but are swapped to disk – virtual memory usage grows beyond the available RAM. To keep the server responsive, everything time-critical should fit in RAM at once, and if the processes themselves don’t know how to release memory in this situation, you need to tune a memory cap for them.
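Before blaming any single process, it is worth checking whether the box is actually dipping into swap. A minimal check with standard Linux tools (a sketch; the interval is arbitrary):

free -m
# the Swap row shows how much has already been paged out to disk
vmstat 5
# the si/so columns report pages swapped in/out per interval; sustained non-zero values mean trouble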
MySQL memory consumption
MySQL can be a greedy bastard when it comes to memory consumption. On this server MySQL seems to take 417 MB of virtual memory, which seems a little excessive for running just two WordPress instances and one Django / Python application:
1310 mysql 20 0 417M 21100 2776 S 0.0 1.2 0:00.00 /usr/sbin/mysqld --basedir=/usr --datadir=/var/lib/mysql --user=mysql --pid-file=/v
After some tuning I was able to bring it down a bit:
3354 mysql 20 0 276m 19m 2848 S 0 1.2 3:41.19 mysqld
A reduction of roughly 140 MB of virtual memory, or about a quarter of the server’s total memory. Not bad.
Use mtop to monitor the running MySQL, its queries, etc. so you know what’s going on. As you can see, this MySQL has a very good cache hit rate, meaning that it is basically keeping everything in memory. If the content of the sites is less than 10 MB in total, 400 MB has plenty of space to cache it:
load average: 0.05, 0.08, 0.16   mysqld 5.0.51a-3ubuntu5.8-log up 1 day(s), 19:47 hrs
2 threads: 1 running, 6 cached.  Queries/slow: 187.1K/0  Cache Hit: 99.39%
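If mtop is not installed yet, it is packaged on Ubuntu / Debian. A quick sketch of getting it running (connection options such as user and password depend on your MySQL grants; see the mtop man page):

sudo apt-get install mtop
# run against the local MySQL server
mtop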
What eats memory
I am not an expert on MySQL, so I hope someone with more insight will post comments on how to tune MySQL for low-memory situations and how it is expected to behave.
Some ideas I ran through my head:
- MySQL’s default cache settings on Ubuntu/Debian are not very tight, making them suitable for moderate loads rather than low loads. If you don’t have much content, everything is simply kept in memory (even when it is not needed)
- MySQL uses round robin for connections, and if there are 100 max connections it will allocate a thread stack for each connection (someone please confirm this – I found contradicting information). The sketch below shows how to check the values actually in effect.
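Rather than guessing from the distribution defaults, the running server can be asked for its current values. A minimal sketch using the mysql command-line client (the variables listed are just the ones discussed in this post; the statements work on the MySQL 5.0 series shown above):

mysql -u root -p -e "SHOW VARIABLES LIKE 'key_buffer_size'"
mysql -u root -p -e "SHOW VARIABLES LIKE 'query_cache%'"
mysql -u root -p -e "SHOW VARIABLES LIKE 'max_connections'"
mysql -u root -p -e "SHOW VARIABLES LIKE 'thread_stack'"
# how many connections have actually been in use since startup
mysql -u root -p -e "SHOW STATUS LIKE 'Max_used_connections'"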
Configuring MySQL
Here are some methods for reducing the memory usage. This is what I did on this little box.
MySQL is mostly configured in /etc/mysql/my.cnf on Ubuntu / Debian.
- Halve key_buffer from 16M to 8M. It is used for caching MyISAM indexes.
- Halve query_cache_size to 8M. Also, decrease query_cache_limit to 512K.
- Each connection, even an idle one, gets its own 256 KB buffer (XXX: not sure about this), so decrease the number of maximum connections. Drop max_connections from 100 to 30, as we do not have that many concurrent visitors on the site. Also, set a smaller thread_stack size.
The final adjustments
key_buffer = 8M
max_connections = 30
query_cache_size = 8M
query_cache_limit = 512K
thread_stack = 128K
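After editing /etc/mysql/my.cnf, a sketch of applying and verifying the changes on Ubuntu / Debian (the grep pattern just picks out the variables changed above):

sudo /etc/init.d/mysql restart
# confirm the running server picked up the new values
mysqladmin -u root -p variables | grep -E 'key_buffer_size|query_cache_size|query_cache_limit|max_connections|thread_stack'
# check how the memory footprint settles
top -b -n 1 | grep mysqld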
More info
- Fine-tuning MySQL
- Showing what database engine MySQL tables are using
- MySQL server memory usage
- Reducing MySQL memory usage for low end boxes
Send in more tips, please! Is 32-bit better than 64-bit for a low-end VPS, and how much does this affect MySQL?
Hi Mikko,
The problem you are facing is not a ‘mysql memory usage’ problem; it’s an OpenVZ memory management issue.
In your example:
1310 mysql 20 0 417M 21100 2776 S 0.0 1.2 0:00.00 /usr/sbin/mysqld
mysql uses 21M of RSS, not 417M. The 417M is VIRT memory, which is given for free on almost all systems (real servers, Xen or KVM VPSes), so software can usually allocate as much VIRT memory as it wants. Moreover, the OS itself considers VIRT free to allocate and reserves about 10M per thread for the thread stack in VIRT memory. But this doesn’t work on OpenVZ, because there VIRT is limited, not RSS.
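Both figures can be read straight from the kernel for comparison; a quick sketch, assuming a single mysqld process:

# VmSize is what top shows as VIRT, VmRSS is the part actually resident in RAM
grep -E 'VmSize|VmRSS' /proc/$(pidof mysqld)/status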
That’s why so many people consider Apache or MySQL InnoDB a memory hog. These programs use threads, and on cheap OpenVZ VPSes they really do eat a huge amount of memory.
There are several possible ways to decrease memory usage on OpenVZ.
1. Use single-threaded software (e.g. replace apache with nginx, use MyISAM instead of InnoDB)
2. Decrease the thread stack size using software config options (e.g. the ThreadStackSize option for apache)
3. Decrease the thread stack size globally. This can be done with the ‘ulimit -s’ command. Try typing e.g. ‘ulimit -s 2048’, then restart mysql and check the memory usage. If everything works fine, the command can be added to the boot sequence (e.g. to /etc/init.d/rc so it runs before mysql and apache are started).
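A sketch of that third option on a Debian/Ubuntu box (the 2048 KB stack size is just the example value mentioned above, not a recommendation):

# lower the default stack size for processes started from this shell to 2 MB
ulimit -s 2048
# restart the daemon so its threads are created with the smaller stack
/etc/init.d/mysql restart
# compare virtual and resident sizes before and after
ps -C mysqld -o vsz,rss,cmd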
But the best way is to avoid OpenVZ and use XEN/KVM. A “512M OpenVZ” plan is pure marketing; it is worse than 256M on XEN because they count different kinds of memory. Look, your mysql will use 400M+ on OpenVZ and less than 25M on XEN. Get something like Linode and your memory problems will disappear.
There must be some sort of DB-connection pooling (app?) for Django’s ORM? Then you could have a tunable, fixed number of DB connections, released back to the pool when a request is completed…
I would think there’s a PHP DB-connection pooling “app”, too.
Do you really want to run a Varnish caching proxy and memcached on that relatively small VPS? Perhaps there’s a way to use an Nginx plugin and Django caching to take over from these? (not sure).
Thank you for your insightful reply, Mikhai. However, I am not using OpenVZ; I am using XEN.
If you are using XEN, then memory consumption shouldn’t be measured with the VIRT column.
In your case MySQL consumes 21M before the changes and 19M after, not 417M and 276M.
VIRT still goes to swap and means that it is put on the disk at some point, right?
No, VIRT does not go to swap, and in most cases it is never put on disk. Swapped memory is accounted for in VIRT, as are stacks, mmapped files, etc. Take a look at http://linux-mm.org/ActualMemoryFootprint for more info. Gigabytes of VIRT can be allocated without any negative impact on the system.
However, in this particular case we can observe swap space growth, so something is going there and memory usage must be reduced.
Yep, but in order to reduce memory it’s good to know what exactly is consuming it. And to find that out, look at the RSS column, not the VIRT column (on XEN).
Increasing VIRT can be a side effect of swapping, but it is definitely not a cause of swapping.
Are there any tools for getting better data, with a historical profile, on how much memory Linux processes hold in RES and how much is swapped out?
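One low-tech option is to sample the kernel’s per-process counters at regular intervals; a sketch (the VmSwap field is only available on newer kernels, and the log file name and interval are arbitrary):

# log mysqld's resident and swapped-out memory once a minute
while true; do
  echo -n "$(date '+%F %T') " >> mysqld-memory.log
  grep -E 'VmRSS|VmSwap' /proc/$(pidof mysqld)/status | tr '\n' ' ' >> mysqld-memory.log
  echo >> mysqld-memory.log
  sleep 60
done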