Sep 11

It has been a while since my last post 🙂 The site is running on AWS nowadays and I wanted to test my nginx and php-fpm setup on CentOS7. I did not want to install a virtual machine from an AMI image and reconfigure the server all over again. This is where Docker came in handy. Here are some things I noticed during the test.

So the target was to build a container with PHP5.4 without affecting the actual site. For this, Docker has a nice feature which allows you to map a directory from the host machine into the container.
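As a quick illustration of the syntax (with generic placeholder paths, not the actual site layout), a host directory is mapped with the -v flag:

# docker run -v /path/on/host:/path/in/container some_image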

Here is the Dockerfile to build the CentOS7 image, with my comments.

FROM centos:centos7
MAINTAINER J.Berg contact@mceith.com
RUN rpm -Uvh http://nginx.org/packages/centos/7/noarch/RPMS/nginx-release-centos-7-0.el7.ngx.noarch.rpm
RUN yum install epel-release -y
RUN yum update -y && yum install nginx php-fpm php-mysql -y
RUN mkdir /var/wwwlogs
# Copy the original settings. Note that the files must be inside the build directory.
COPY nginx.conf /etc/nginx/nginx.conf
COPY sites-enabled /etc/nginx/sites-enabled
COPY sites-available /etc/nginx/sites-available
COPY conf.d /etc/nginx/conf.d
ADD run.sh /run.sh
# Do not start nginx as daemon.
RUN sed -i '1 i\daemon off;' /etc/nginx/nginx.conf
# Run php-fpm as user mceith instead of apache.
RUN sed -i -e 's/apache/mceith/g' /etc/php-fpm.d/www.conf
# Match the user id with the host system for php-fpm.
RUN groupadd -g 501 mceith && useradd -M -u 501 -g 501 mceith -s /sbin/nologin
EXPOSE 80
ENTRYPOINT /run.sh

Since the container does not have systemd or any other init system to handle running processes, we need to make a script which starts them for us:

run.sh

#!/bin/bash
# Start php-fpm daemonized, then nginx in the foreground (daemon off).
/usr/sbin/php-fpm -D && /usr/sbin/nginx
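A slightly more robust variant (a sketch, not what I tested here) would exec nginx so it becomes PID 1 and receives stop signals from Docker directly:

#!/bin/bash
# Start php-fpm as a background daemon.
/usr/sbin/php-fpm -D
# Replace the shell with nginx so it handles signals as PID 1.
exec /usr/sbin/nginx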

Build the image:


# docker build -t nginx_test .
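To confirm that the image was built, list the local images:

# docker images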

And run it on port 8080:


# docker run -t -i -d -p 8080:80 -v /var/www/mceith/public_html:/var/www/mceith/public_html -v /var/wwwlogs:/var/wwwlogs nginx_test
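A quick way to check that the container actually serves the site (run on the same host):

# curl -I http://localhost:8080/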

The site runs in parallel with CentOS6/PHP5.3 on port 8080. Seems to work with CentOS7/PHP5.4 also! 🙂

Dec 28

One issue with zfs send over the network is that the documentation available from Oracle only shows examples with ssh. This is not the best choice if one is dealing with large datasets: ssh is very slow and cannot make use of 10 Gigabit networks.

I have noticed that mbuffer is very capable of delivering high speeds with zfs send. It needs the mbuffer client on both the sending and the receiving end.

Set up the receiver to accept a connection from the sender's IP address 192.168.1.100:

# mbuffer -s 128k -m 1G -I 192.168.1.100:50001 | zfs receive tank/backup/data

mbuffer listens on port 50001 and uses 1 Gigabyte of memory to buffer incoming data. The block size is set to 128k, which is the default recordsize on ZFS.
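If unsure, the recordsize of the source dataset can be checked beforehand (using the dataset name from the next step):

# zfs get recordsize tank/nfs/data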

Send the data to the receiver at address 192.168.1.10:

# zfs send tank/nfs/data@today | mbuffer -s 128k -m 1G -O 192.168.1.10:50001

One should see a significant increase in zfs send speed. If the dataset is busy with bursty writes, mbuffer will absorb them in its own buffer on both ends.
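For recurring backups one would typically send incremental streams instead of full ones. A sketch, assuming a previous snapshot @yesterday already exists on both ends:

# zfs send -i tank/nfs/data@yesterday tank/nfs/data@today | mbuffer -s 128k -m 1G -O 192.168.1.10:50001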

Remember that the data stream is not encrypted, which can be an issue in some cases, so use dedicated networks for sending snapshots with mbuffer.

mbuffer can be found here

Nov 03

I upgraded XenServer 5.5.0 with the latest update from Citrix and rebooted the host server. This led to an infinite boot loop, and rebooting the server without the quiet and splash options revealed that it did not see the H200 RAID card anymore and could not mount the root filesystem.

Citrix has a fix for this issue in the mpt2sas driver at: http://support.citrix.com/article/CTX130763

I installed the fix only to see that it did not help at all.

I fell back to the previous kernel from the GRUB menu (type menu.c32 at the GRUB prompt) and looked at the Citrix-provided rpm package:


# rpm -qpl /mnt/tmp/mpt2sas-modules-xen-2.6.18-128.1.6.el5.xs5.5.0.513.1041-02.00.00.00-1.i386.rpm
/lib/modules/2.6.18-128.1.6.el5.xs5.5.0.513.1041xen/extra/mpt2sas.ko

It seems that the rpm includes a pre-built module. XenServer is basically a RHEL-based Linux distribution, so I assumed that the ramdisk created during the kernel upgrade still contained the wrong version of the module and that the fix rpm never rebuilt it. Querying the rpm confirmed my assumption:


# rpm -qp --scripts /mnt/tmp/mpt2sas-modules-xen-2.6.18-128.1.6.el5.xs5.5.0.513.1041-02.00.00.00-1.i386.rpm
postinstall scriptlet (using /bin/sh):
depmod 2.6.18-128.1.6.el5.xs5.5.0.513.1041xen

Indeed, there is no script to recreate the ramdisk, so I made it manually:


# mkinitrd -v -f --with=mpt2sas /boot/initrd-2.6.18-128.1.6.el5.xs5.5.0.513.1041xen.img 2.6.18-128.1.6.el5.xs5.5.0.513.1041xen
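To verify that the module really ended up in the new ramdisk, its contents can be listed (the initrd is a gzipped cpio archive):

# zcat /boot/initrd-2.6.18-128.1.6.el5.xs5.5.0.513.1041xen.img | cpio -t | grep mpt2sas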

And the server booted up normally.