Bill Pyne
Posts: 165
Nickname: billpyne
Registered: Jan, 2007
Re: Database Throughput
Posted: Jul 13, 2007 9:41 AM
> We have quite a few customers with applications that have the "5000 events per second" requirement, and a small number over the 100,000 mark. Some of these are "peak load" numbers (e.g. markets that see bursts of activity at open and close) and some of these are sustained (non-stop load of thousands of transactions / operations / events / etc. per second).
Thank you for the response. I haven't run into volume requirements that steep before, so they sometimes seem unreal. Still, I would like to know what percentage of applications that persist data actually require that throughput. I believe it's a small percentage, but I have no figures to back it up.
> The reasons why a database may or may not be suitable for a task isn't just raw speed. Sometimes it has to do with the isolation requirements, or the serial versus parallel nature of the workload, or the ability to partition the load, or the durability requirements, etc. A database, for example, may impose too high a cost for serial transactions against the same small set of data, due to the isolation+durability requirement. (Serial + isolated + disk durable transactions create a maximum throughput limiter tied directly to the transaction latency, and if the transactions are driven out-of-process, that latency becomes even more significant.)
>
> Peace,
>
> Cameron Purdy
> http://www.oracle.com/technology/products/coherence/index.html
Great point.
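To make the throughput limiter concrete: if transactions are strictly serial, isolated, and must be durable on disk before the next one starts, the ceiling is simply the reciprocal of the per-transaction latency. A minimal back-of-the-envelope sketch (the function name and the example latencies are my own illustration, not figures from the discussion):

```python
def max_serial_tps(commit_latency_s: float, driver_rtt_s: float = 0.0) -> float:
    """Upper bound on transactions/sec for a strictly serial workload.

    Each transaction must fully commit (including any out-of-process
    round trip) before the next one can begin, so throughput is capped
    at 1 / total per-transaction latency.
    """
    total_latency = commit_latency_s + driver_rtt_s
    return 1.0 / total_latency

# Hypothetical numbers: a 5 ms disk-durable commit caps a serial
# stream at 200 TPS, no matter how fast the database is otherwise.
in_process = max_serial_tps(0.005)            # 200.0 TPS

# Driving the same transactions out-of-process adds round-trip
# latency, which lowers the ceiling further.
out_of_process = max_serial_tps(0.005, 0.001)  # ~166.7 TPS
```

This is why partitioning the load matters so much in the quoted argument: each independent partition gets its own serial stream, so the ceiling multiplies by the number of partitions.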