- From: Daniel Black <daniel AT cacert.org>
- To: cacert-sysadm AT lists.cacert.org
- Subject: infrastructure project planning -tech
- Date: Fri, 31 Jul 2009 13:25:29 +1000
- Organization: CAcert
On Friday 31 July 2009 11:30:18 Ian G wrote:
> Let's get serious. Let's get professional!
> Here's a scratch plan to move on:
>
thanks Ian.
(from https://lists.cacert.org/wws/arc/cacert-board/2009-07/msg00443.html)
> 2. *Tech*
> It looks to me that the idologic deal falls a bit short of what the
> infrastructure team were hoping for. So the tech team needs to get a
> better view of what they need
>
> * http://wiki.cacert.org/wiki/SystemAdministration/InfrastructureHost
> * how many VMs ... versus how much grunt
> * tech numbers to make it worthwhile: IPs, Memory, disk, etc.
> * do we split up the VMs across how many servers?
> * reset regime.
> * install regime
> * have we standardised on a single OS / distro? Debian?
>
> It seems that this task falls to the tech teams: Daniel with help from
> Christopher and the others who have chimed in with comments?
My thoughts:
> * how many VMs ...
I'm trying to keep one service to one VM. The system overhead for each VM isn't
too much.
Limits come into play with (public) IPs (more on that below).
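For reference, bringing up a new one-service guest under util-vserver looks roughly like this (a sketch only: the context ID, IP address and mirror are placeholders, not our real values):

```shell
# Sketch: context ID, hostname, IP and mirror are illustrative placeholders.
vserver ocsp2 build -m debootstrap \
    --context 52600 \
    --hostname ocsp2.cacert.org \
    --interface eth0:192.0.2.50/24 \
    -- -d lenny -m http://ftp.debian.org/debian
vserver ocsp2 start
```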
> ... versus how much grunt
Current usage:
sun2: vserver-stat
CTX PROC VSZ RSS userTIME sysTIME UPTIME NAME
0 67 119.7M 62.6M 3d02h34 1d06h46 69d12h10 root server
50271 96 1G 377.5M 2h02m55 1h01m32 46d16h28 email
50272 20 339.7M 90.2M 1m28s11 1m22s22 46d16h14 webmail
50276 28 671.5M 265M 1h16m33 6h53m06 46d16h06 lists
50277 18 816.5M 99.2M 14h31m32 1h37m37 46d16h03 wiki
50281 17 316.6M 68.1M 0m26s94 0m35s11 46d15h55 dupes(hashserver)
50282 14 62.3M 18.1M 0m29s90 2m40s37 46d15h55 crl
50283 16 371.6M 71.7M 8m03s00 5m17s93 46d15h49 irc
50284 21 460.8M 182.5M 14m02s95 5m02s34 46d15h48 translingo
50285 20 375.8M 111.7M 19m08s90 6m14s50 46d15h47 cats
50286 7 550.4M 81.6M 2m36s42 0m24s20 46d15h47 svn
50287 19 497.5M 109.2M 1h07m46 3m53s74 46d15h47 bugs
50288 9 48.4M 14.4M 0m10s00 0m12s62 46d15h46 www
50533 20 589.7M 194.1M 14h13m52 40m09s45 41d14h21 blog
51101 11 956.8M 332.7M 56m12s75 6m46s83 29d19h45 issue
51662 12 273.3M 61.2M 0m10s26 0m23s87 18d03h42 test2
52533 4 72.3M 22M 0m00s47 0m00s14 9m52s22 ocsp
The root server is CPU intensive, I assume due to the backups (every 10
minutes), which do a lot of compression and duplicate elimination.
The rest isn't particularly CPU intensive.
Memory is a slightly bigger issue when running so many services. Current usage
is ~2.5G out of 4G.
> * tech numbers to make it worthwhile: IPs, Memory, disk, etc.
IPs:
I quite like the current internal/external IP split arrangement. It helps
when deploying community.cacert.org for instance, where webmail is on the
webmail VM and pop/imap/managesieve are on the email VM.
The SSL services are currently the determining factor on IP addresses. As
mentioned, Apache SNI is now available; however, it needs a little testing
first, and I think having client certificates everywhere will be a good test for it.
We'll probably end up with an Apache SNI proxy machine.
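A minimal sketch of what such an SNI proxy could look like (assumes Apache >= 2.2.12 built against an SNI-capable OpenSSL; hostnames, certificate paths and internal IPs are placeholders):

```apache
# Sketch: names, paths and internal addresses are illustrative only.
NameVirtualHost *:443

<VirtualHost *:443>
    ServerName webmail.cacert.org
    SSLEngine on
    SSLCertificateFile    /etc/ssl/certs/webmail.pem
    SSLCertificateKeyFile /etc/ssl/private/webmail.key
    # reverse-proxy to the internal webmail VM
    ProxyPass        / http://10.0.0.12/
    ProxyPassReverse / http://10.0.0.12/
</VirtualHost>

<VirtualHost *:443>
    ServerName community.cacert.org
    SSLEngine on
    SSLCertificateFile    /etc/ssl/certs/community.pem
    SSLCertificateKeyFile /etc/ssl/private/community.key
    ProxyPass        / http://10.0.0.13/
    ProxyPassReverse / http://10.0.0.13/
</VirtualHost>
```

With SNI, multiple SSL vhosts share one external IP, which is what would relax the current one-IP-per-SSL-service constraint.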
Memory:
I think all para-virtualisation technologies share the host OS memory and are
quite efficient at it. We just need to make sure the total is sufficient.
Disk:
I was planning an LVM volume group with a logical volume for each VM. It
scales reasonably and, for better or worse, places disk-space limits on VMs.
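Roughly what I mean, as a sketch (device names, sizes and VM names are placeholders):

```shell
# Sketch: one volume group across the data disks, one logical volume
# per VM, so each VM gets a hard disk-space ceiling that can be grown later.
pvcreate /dev/sdb1 /dev/sdc1
vgcreate vg_vms /dev/sdb1 /dev/sdc1
for vm in email webmail lists wiki; do
    lvcreate -L 10G -n "lv_$vm" vg_vms
    mkfs.ext3 "/dev/vg_vms/lv_$vm"
done
# grow a VM's allocation later:
#   lvextend -L +5G /dev/vg_vms/lv_email && resize2fs /dev/vg_vms/lv_email
```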
Any good ideas for redundancy?
The alternative is one big filesystem with vserver chroot separation. I was
considering Gluster to replicate the disk in real time to a neighbouring server.
> * do we split up the VMs across how many servers?
Once we reach limits we'll have to.
> * reset regime.
Control via the host OS, or failing that, the provider's interfaces (phone/web/support).
> * install regime
I plan on deploying Puppet for this project and letting it handle the standard
aspects of all servers (backup/syslog/monitoring/ssh
hardening/accounts(?)/security updates).
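As a rough illustration of the kind of manifest I have in mind (class, module and file names are made up for the example; nothing is agreed yet):

```puppet
# Sketch: class/module/package names are illustrative, not a settled layout.
class cacert::base {
  package { ['openssh-server', 'rsyslog', 'unattended-upgrades']:
    ensure => installed,
  }

  # ssh hardening applied identically on every VM
  file { '/etc/ssh/sshd_config':
    source => 'puppet:///modules/cacert/sshd_config',
    notify => Service['ssh'],
  }

  service { 'ssh':
    ensure => running,
    enable => true,
  }
}
```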
> * have we standardised on a single OS / distro? Debian?
Debian is the current flavour because of limited staff and because it's commonly
understood. Once Puppet gets going, perhaps some more variety can be handled
without too much pain. I'm interested to hear people's experience here.
Other bits:
bandwidth - volume - see previous breakdown
bandwidth burst - how to specify requirements here?
reverse DNS?
--
Daniel Black
Infrastructure Administrator
CAcert