Monday, November 10, 2008

DB Record Insertion Rate: A Trivial Experiment

I was recently involved in a preliminary investigation for a new project that requires (at least what I thought was) a very high rate of record insertion into a database. In a nutshell, the ability to insert a minimum of 200 records a second, up to a maximum of 2000 records a second, is required. Even though the records are small, at 80 bytes each spread across 4 fields, I wasn't confident the DB I had at my disposal would meet these requirements. So I ran a quick experiment.

My experiment involved writing a Groovy script to insert 17 million records into a Microsoft SQL Server 2000 DB, both running on my laptop. My laptop has an Intel Core 2 Duo (1.8 GHz) CPU and 3 GB of RAM. Each record consisted of 120 bytes, split into 5 fields when inserted into the DB. The script inserted the records in batches of 1000 using SQL similar to the following (with 'ValueX' replaced by content to fill out the required 120 bytes) [See Note 1]:

INSERT INTO Messages (Col1, Col2, Col3, Col4, Col5)
SELECT 'Value1', 'Value2', 'Value3', 'Value4', 'Value5'
UNION ALL
SELECT 'Value1', 'Value2', 'Value3', 'Value4', 'Value5'
UNION ALL
SELECT 'Value1', 'Value2', 'Value3', 'Value4', 'Value5'
UNION ALL
SELECT 'Value1', 'Value2', 'Value3', 'Value4', 'Value5'
....

Results:
- Approximately 1400 records per second were inserted.
- The size of the DB did not affect the insertion rate; i.e. the insertion rate did not drop as the DB got larger.
- Maximum memory usage was about 2 GB.
- The DB required about 6 GB of disk space (including index files).
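The original Groovy script isn't shown, but the batching technique is straightforward to sketch: build one multi-row INSERT per batch by chaining SELECTs with UNION ALL, then send the whole statement in a single round trip. Below is a minimal Java sketch of that string assembly (Groovy would accept nearly identical code); the class and method names are my own, the table and column names come from the SQL above, and the JDBC execution step is only indicated in a comment.

```java
// Sketch: assembling a batched INSERT like the one shown in the post.
// Class/method names are illustrative, not from the original script.
public class BatchInsertBuilder {

    // Build one INSERT statement covering batchSize rows via UNION ALL.
    static String buildBatchInsert(String[] values, int batchSize) {
        StringBuilder sb = new StringBuilder(
            "INSERT INTO Messages (Col1, Col2, Col3, Col4, Col5)\n");
        for (int i = 0; i < batchSize; i++) {
            if (i > 0) sb.append("UNION ALL\n");
            sb.append("SELECT ");
            for (int j = 0; j < values.length; j++) {
                if (j > 0) sb.append(", ");
                sb.append('\'').append(values[j]).append('\'');
            }
            sb.append('\n');
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // In the experiment each row carried 120 bytes across 5 fields;
        // here the placeholder values stand in for that padding content.
        String[] row = {"Value1", "Value2", "Value3", "Value4", "Value5"};
        String sql = buildBatchInsert(row, 1000);
        // Each batch would then be sent once, e.g. via
        // java.sql.Statement#execute(sql) on an open Connection
        // (or groovy.sql.Sql#execute from a Groovy script).
        System.out.println(sql.length() + " characters in one batch statement");
    }
}
```

The point of the UNION ALL form is that the whole 1000-row batch is parsed and executed as one statement, amortizing per-statement overhead; UNION ALL (rather than UNION) matters because it skips the duplicate-elimination step a plain UNION would add.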