chris123 at magma.ca
Fri Oct 25 21:26:59 EDT 2002
On Friday 25 October 2002 19:54, Timothy Brier wrote:
> There will be a lot of queries / updates / deletes running. The current
> SQL database size is 2 GB, I don't know what this will translate to in
> Postgres. We will have as many as 700 clients connected at once.
> The number of records accessed by each user could be as high as 60-70,000
> per transaction.
> The configuration will be one Web Server and one Postgres server.
Your biggest issue with that level of transactions is having some form of
rapid, built-in redundancy/recovery strategy for when and if your systems
go down. Typically hardware is not the issue, as it's dirt cheap; typically
it's the systems design that is the issue.
So how are you going to recover if someone pulls the plug on the database
server by mistake...and I am serious too for a change..:)
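
To make that concrete, here is a rough Python sketch of the sort of
watchdog I mean; the host name is made up and the alerting is left as a
stub, but the point is that you notice the outage in seconds rather than
when the users start phoning:

# Dumb watchdog: poll the Postgres port so a pulled plug is noticed fast.
# The host is a placeholder; 5432 is just the PostgreSQL default port.
import socket
import time

def db_is_up(host="db.example.com", port=5432, timeout=3):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

while True:
    if not db_is_up():
        print("database unreachable -- start the recovery plan now")
        # e.g. page the on-call admin, fail over to the staging server
    time.sleep(30)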
Consider the file systems used: ext2 vs ext3 vs ReiserFS vs whatever. It
does make a big difference, and not only from the perspective of journalled
or not.
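
If you want to feel the difference yourself, a quick and dirty probe like
this (pure Python; the mount points are placeholders) times the
write-and-fsync pattern that a database's log writer hammers on. Run it on
each candidate file system and compare:

# Rough fsync-latency probe; run once per candidate mount point.
import os
import time

def fsync_latency(path, iterations=200):
    """Append-and-fsync in a loop, like a database committing records."""
    fname = os.path.join(path, "fsync_probe.tmp")
    fd = os.open(fname, os.O_WRONLY | os.O_CREAT, 0o600)
    start = time.time()
    for _ in range(iterations):
        os.write(fd, b"x" * 512)   # small record, like one commit
        os.fsync(fd)               # force it down to the disk/journal
    elapsed = time.time() - start
    os.close(fd)
    os.unlink(fname)
    return elapsed / iterations

if __name__ == "__main__":
    for mount in ("/tmp", "/var"):   # placeholder mount points to compare
        print(mount, "%.4f s per fsync" % fsync_latency(mount))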
Next, backup and redundancy. The DB server goes down...how do you recover?
The web server gets cracked...how do you recover? The issue here with 700
users is time. You want the transition from backups to staging server to
production server to be as smooth and as quick as possible.
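
For the backup half, even something as simple as a timestamped pg_dump run
from cron gets you a replayable plain-SQL dump. This is only a sketch, and
the database name and backup directory are placeholders:

# Minimal nightly-dump sketch, assuming pg_dump is on the PATH.
import os
import subprocess
import time

BACKUP_DIR = "/var/backups/pgsql"   # placeholder destination

def dump_database(dbname):
    os.makedirs(BACKUP_DIR, exist_ok=True)
    stamp = time.strftime("%Y%m%d-%H%M%S")
    outfile = os.path.join(BACKUP_DIR, "%s-%s.sql" % (dbname, stamp))
    with open(outfile, "wb") as out:
        # pg_dump writes a plain-SQL dump that psql can replay on a
        # staging box, which keeps the restore path simple and testable.
        subprocess.run(["pg_dump", dbname], stdout=out, check=True)
    return outfile

if __name__ == "__main__":
    print("wrote", dump_database("mydb"))   # "mydb" is a placeholder

And actually practise replaying the dump onto the staging box, so you know
how long the restore really takes before the day you need it.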
Other things to consider: power supply backups, the level/number of admins,
a solid and recoverable backup strategy, firewalling? At what level do data
and hardware redundancy become superfluous?
Consider all the elements, not just the hardware components in isolation,
and you will sleep comfortably..:)