briert at cepu.ca
Fri Nov 1 19:17:03 EST 2002
As a follow-up to my initial question, we ran some tests at our company and
here are some of the details.
I wrote a test to simulate our environment for which the question was
submitted. I had it reviewed by some skeptics who thought it was a "good
test" and egged me on about another MS victory.
The test basically simulates 300 users - except for the SQL Server, which
topped out at 160 users - all doing inserts, updates, deletions, and
selects with back-end calculations, functions and views.
Each client request involved 1.2 million records in 7 tables.
I did turn off fsync(), because the other database, SQL Server, has a delayed
write feature as well. After running the test against each server 4
times - with the skeptics present and part of the test - the PostgreSQL
system outshone the SQL Server handily. Here is a summary of what we observed.
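For reference, disabling fsync in PostgreSQL 7.3 is a one-line change in
postgresql.conf. It trades crash safety for speed, which is why it makes for a
fairer comparison against SQL Server's delayed writes:

```
# postgresql.conf -- stop forcing WAL flushes to disk on every commit.
# Unsafe if the machine can crash or lose power; used here only to
# match SQL Server's delayed-write behavior for the benchmark.
fsync = false
```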
The client machines were all running Windows 2000. Some were dual
processors. The slowest client processor used was a 700 MHz PIII. All were
connected to a 100 Mb network, no other network activity was present, and the
servers were not being used for any other purpose. Each client averaged a
transfer of 800K per test.
The Web server is a 700 MHz Celeron with 384 MB of RAM running Apache.
The SQL Server machine is a dual 733 MHz PIII with four 18 GB SCSI drives
@ 15K RPM on a wide SCSI RAID controller with a max throughput of 320 MB per
channel across 4 channels, 1 GB of RAM and 2 NICs.
The PostgreSQL machine is a single 1.2 GHz Celeron with a standard IDE drive
(30 GB) and 512 MB of RAM.
The test software showed each client's CPU(s) being maxed at 100 percent.
The databases contain identical data, and all machines were shut down and
restarted between tests to avoid any caching effects.
At 160 clients our SQL Server maxed out both CPUs at 80%, and the system
was not very responsive to other processes. The PostgreSQL system would
reach 75% and spawn a new process as necessary. Overall it sat at 50%
usage, and the system remained very responsive. No failures (cases where the
web page couldn't be delivered) were reported by any clients for this test.
At 300 clients too many failures were occurring for the SQL Server, so we
ranked that test as incomplete. The PostgreSQL system reached peaks of 96% usage,
but as one skeptic said, "It basically stayed at 80%".
Management who saw the test are now questioning why we need SQL server, and
have asked if we can put Linux / PostgreSQL on our other servers. Not yet
As someone who has used SQL Server for most of my back-end development, the
important thing to remember with PostgreSQL is to use EXPLAIN to test your
queries, look for any seq scans, and put an index on those fields. When you
create an index, don't forget to run VACUUM ANALYZE on the table so the planner
has the statistics it needs to use the index. PostgreSQL has most, but not all,
of the functionality we use; I'm not going to detail this right now, maybe later.
PostgreSQL is a very capable database, and should be strongly considered before
an MS SQL Server solution, not for the sole purpose of saving money, but for
better performance.
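The EXPLAIN / index / vacuum workflow above looks roughly like this in psql
(the table and column names here are made up for illustration):

```sql
-- See what the planner does with a typical query.
EXPLAIN SELECT * FROM orders WHERE customer_id = 42;

-- If the plan shows "Seq Scan on orders", index the filtered column.
CREATE INDEX orders_customer_id_idx ON orders (customer_id);

-- Refresh the planner's statistics so the new index gets considered.
VACUUM ANALYZE orders;

-- Re-run EXPLAIN; the plan should now use an index scan.
EXPLAIN SELECT * FROM orders WHERE customer_id = 42;
```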
This test is not meant to question the capabilities of other databases on
Linux; it was an exercise of MS Server + SQL Server vs. RH 7.3 + PostgreSQL.
We also have considered using SYBASE on Linux, but these test results have
changed our mind.
Forgive any ramblings, it has been a long three weeks.
----- Original Message -----
From: "Chris Herrnberger" <chris123 at magma.ca>
To: <oclug at lists.oclug.on.ca>
Sent: Friday, November 01, 2002 11:39 AM
Subject: Re: [oclug]Postgresql tuning
On Friday 01 November 2002 11:17 am, Ian Wormsbecker wrote:
> I was just roaming around the net and found a document that may be useful
> to those people that were discussing hardware requirements for databases.
> This one is specific to postgresql even so...
Hey, this is good. Will review it over the weekend.
Many thanks Ian