Dimensioning of the Database (4.3)
The parameters included in this section depend on the number of BucketDataHolders that will be stored in the database, and they may be dimensioned as follows:
Parameter | Description |
---|---|
DataMemory | DataMemory is the amount of memory used for storing the data. This amount of physical memory is required on each of the data nodes, in addition to the memory required for the OS etc., and is calculated as follows: (40 b + average usage per BucketDataHolder) x NumberOfBucketDataHolders = DataMemory. The maximum size of a BucketDataHolder is 14 kb. However, with 5 buckets per BucketDataHolder and 1-2 sessions, you may expect a size of about 1-2 kb. Example - DataMemory: If you plan to have 50 000 BucketDataHolders with an average usage of 2 kb, the required DataMemory will be: (40 b + 2048 b) x 50 000 = 104 400 000 bytes |
IndexMemory | IndexMemory is the amount of memory used for the index of each table/entry. This data is also stored on the data nodes, which means that this amount of memory must be available on each data node. Usually, this parameter is configured to 1/5 or 1/6 of the DataMemory. However, the required amount of memory may also be calculated as follows: 20 b x TotalNumberOfRecords (one record for each BucketDataHolder) = IndexMemory. Example - IndexMemory: If the DataMemory is configured to 104 400 000 bytes, as in the DataMemory example, configuring the IndexMemory to 1/5 gives 20 880 000 bytes, and to 1/6 gives 17 400 000 bytes. If the IndexMemory is instead calculated from the number of records, and you plan to have 50 000 BucketDataHolders, the required IndexMemory will be: 20 b x 50 000 = 1 000 000 bytes |
FragmentLogFileSize and NoOfFragmentLogFiles | These two parameters together make up the total amount of redo space. The redo space is the disk space required for keeping a copy of all changes made to the database, so that the database can be recovered after a crash, for example. The redo space should be about six times the available data memory and is calculated as follows: 4 x FragmentLogFileSize x NoOfFragmentLogFiles = 6 x DataMemory. The reason for multiplying by four is that four times the number of configured files are created for the backup. NoOfFragmentLogFiles has a minimum value of 3, and it is generally recommended to set FragmentLogFileSize to 256 M. Example - NoOfFragmentLogFiles: If you have 10 000 M in DataMemory, you will need 60 000 M of redo space: 4 x 256 M x NoOfFragmentLogFiles = 60 000 M, which means that NoOfFragmentLogFiles should be at least 59 |
DiskCheckPointSpeed | DiskCheckPointSpeed specifies the number of Mb per second that should be written to disk during a local checkpoint (a complete backup of the database to disk, performed on a regular basis). When configuring this parameter, consider both the amount of data in the database and the speed of the hard drive. |
MaxNoOfConcurrentTransactions | With this parameter you specify the maximum number of concurrent transactions that may be performed. Each thread uses at least one transaction during load, but typically this parameter can be set higher. The recommended setting is 1024; however, this may need to be adjusted during functional and performance testing. |
MaxNoOfConcurrentOperations | With this parameter you specify the maximum number of concurrent operations. One operation constitutes one update of one record, which means that deleting 10 000 entries results in 10 000 concurrent operations. This parameter may initially be set to 32 768 and then be adjusted during functional and performance testing. |
TimeBetweenGlobalCheckPoints | This parameter determines how often the redo log is flushed to disk. The default value is 2 000 ms and the recommended value is 1 000 ms. |
TimeBetweenLocalCheckPoints | With this parameter you specify how often a local checkpoint should be made. By default it is set to 20, which is usually a reasonable configuration. The calculation is based on the amount of changes: if this parameter is configured to 20, the effect is as follows: 4 x 2^20 bytes = 4 Mb, i.e. a rewrite is made when at least 4 Mb of changes have been written to the database. If less has been written, a local checkpoint is made after 57 minutes. Only one rewrite is active at a time, which means that this configuration still works even under heavier traffic. |
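
The parameter names above match MySQL Cluster (NDB) data-node configuration, although the section does not say so explicitly. Under that assumption, the values from the worked examples could be collected in the `[ndbd default]` section of `config.ini` roughly as follows; this is an illustrative sketch, not a complete configuration:

```ini
[ndbd default]
# Values from the worked examples above (50 000 BucketDataHolders of ~2 kb)
DataMemory=104400000
IndexMemory=20880000
FragmentLogFileSize=256M
NoOfFragmentLogFiles=59
MaxNoOfConcurrentTransactions=1024
MaxNoOfConcurrentOperations=32768
TimeBetweenGlobalCheckpoints=1000
TimeBetweenLocalCheckpoints=20
```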
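
The DataMemory and IndexMemory formulas from the table can be sketched as a small calculation. The function and constant names below are illustrative, not actual configuration keys:

```python
# Memory-dimensioning sketch based on the formulas in the table above.

RECORD_OVERHEAD_B = 40   # per-BucketDataHolder overhead from the DataMemory formula
INDEX_ENTRY_B = 20       # per-record index cost from the IndexMemory formula

def data_memory(num_holders, avg_holder_bytes):
    """(40 b + average usage per BucketDataHolder) x NumberOfBucketDataHolders."""
    return (RECORD_OVERHEAD_B + avg_holder_bytes) * num_holders

def index_memory(num_records):
    """20 b per record, one record for each BucketDataHolder."""
    return INDEX_ENTRY_B * num_records

# Worked example from the table: 50 000 BucketDataHolders of ~2 kb each.
print(data_memory(50_000, 2048))  # 104 400 000 bytes
print(index_memory(50_000))       # 1 000 000 bytes
```

Note that this is the memory needed on each data node, so it should be compared against per-node physical memory, not the total across the cluster.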
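
The redo-space equation can likewise be solved for NoOfFragmentLogFiles. This sketch assumes the recommended FragmentLogFileSize of 256 M and enforces the minimum value of 3:

```python
import math

def no_of_fragment_log_files(data_memory_mb, fragment_log_file_size_mb=256):
    """Solve 4 x FragmentLogFileSize x N = 6 x DataMemory for N, minimum 3."""
    redo_mb = 6 * data_memory_mb
    n = math.ceil(redo_mb / (4 * fragment_log_file_size_mb))
    return max(n, 3)

# Worked example from the table: 10 000 M DataMemory needs 60 000 M of redo.
print(no_of_fragment_log_files(10_000))  # 59
```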
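
The TimeBetweenLocalCheckPoints threshold (4 x 2^value bytes of changes before a rewrite) can be checked the same way:

```python
def local_checkpoint_threshold_bytes(time_between_local_checkpoints=20):
    """A rewrite starts once 4 x 2^TimeBetweenLocalCheckPoints bytes have changed."""
    return 4 * 2 ** time_between_local_checkpoints

print(local_checkpoint_threshold_bytes(20))  # 4194304 bytes, i.e. 4 Mb
```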