Hi,

I'm looking for some advice on cache sizes in Ingres.

The following is the output of "top" for my current Ingres installation:
  PID USER    PR NI    VIRT    RES   SHR S  %CPU %MEM    TIME+ COMMAND
 4601 ingres  20  0 7430084 2.144g 33984 S 431.3  7.8  1985:18 iidbms
If I use pmap to look at the memory usage of the process, a la "pmap -x 4601", it seems Ingres has 7430084 kB mapped, which is about 7 GB of virtual address space (though only the ~2.1 GB in the RES column above is actually resident).
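For reference, these are the exact commands I'm using to measure this (4601 being the iidbms PID from the top output above):

    # one-shot snapshot of just the iidbms process
    top -b -n 1 -p 4601

    # per-mapping breakdown; the final "total kB" line is where
    # the 7430084 figure comes from
    pmap -x 4601 | tail -n 1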
<div><span style="font-size: 14.6667px;"><br /></span>I have a lot more than 8GB of RAM, though. I've got 32GB, and I'd like to use a much larger portion of the available memory for logging. I could pretty easily field 96GB if there was a reason to.<br /><br />I currently have the cache guideline for each of the caches set to "huge", which is the largest of the guidelines.<br /><br />2k:<br />
 │cache_guideline  │huge │string     │
 │dmf_group_size   │8    │data pages │
 │dmf_write_behind │ON   │boolean    │

4k:
 │cache_guideline  │huge │string     │
 │dmf_group_size   │8    │data pages │
 │dmf_separate     │OFF  │boolean    │
 │dmf_write_behind │ON   │boolean    │

8k:
 │cache_guideline  │huge │string     │
 │dmf_group_size   │8    │data pages │
 │dmf_separate     │OFF  │boolean    │
 │dmf_write_behind │ON   │boolean    │

16k:
 │cache_guideline  │huge │string     │
 │dmf_group_size   │8    │data pages │
 │dmf_separate     │OFF  │boolean    │
 │dmf_write_behind │ON   │boolean    │

32k:
 │cache_guideline  │huge │string     │
 │dmf_group_size   │8    │data pages │
 │dmf_separate     │OFF  │boolean    │
 │dmf_write_behind │ON   │boolean    │

64k:
 │cache_guideline  │huge │string     │
 │dmf_group_size   │8    │data pages │
 │dmf_separate     │OFF  │boolean    │
 │dmf_write_behind │ON   │boolean    │
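(Those values are from cbf; you can pull the same settings straight out of the configuration file with something like the following. The config.dat path is the standard one under $II_SYSTEM:)

    grep -E 'cache_guideline|dmf_group_size|dmf_separate|dmf_write_behind' \
        "$II_SYSTEM/ingres/files/config.dat"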
It -seems- like I should be able to go into the derived parameter set for each cache configuration section and raise the dmf_cache_size parameter to allocate a larger number of data pages.

I actually tried this some time ago, though, and while the server did use more memory, the net effect on performance was actually negative (what I ran is sketched below). I'm not sure whether my experience was atypical, or whether there really is a reason you shouldn't just use as much memory as you can (i.e., memory that isn't being used by other services on the server).
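For reference, it was along these lines. I'm reconstructing the resource names from memory, so treat the config.dat path in the iisetres call as approximate and check it against your own config.dat first:

    # find the real resource name(s) for the cache sizes first
    grep dmf_cache_size "$II_SYSTEM/ingres/files/config.dat"

    # then raise one, e.g. the 8k cache to 1,000,000 pages (~8 GB);
    # the resource path here is my guess at the shape of the entry
    iisetres "ii.$(iipmhost).dbms.private.*.p8k.dmf_cache_size" 1000000

    # restart the installation so the new size takes effect
    ingstop && ingstart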
So, some specific questions I am hoping to find answers to:

1. Is increasing dmf_cache_size the correct approach to the goal of consuming more memory and improving overall read performance by avoiding trips to the physical disk?
2. Are there negative consequences to setting this value too high?
3. I've noticed that the output of "help table sometable;" shows a field called "cache priority". If I know which of my tables I consider most cache-worthy, can I use this setting to deliberately bias the cache in favor of the tables I care about? If so, how? I've never seen anything that indicates how I could set it.
4. I realize this is super-general, but what do people generally consider "good" cache hit rates? If I keep throwing more memory at the cache, can I reasonably expect to drive hit rates up to values approaching 100% (i.e., if the cache were larger than the actual database itself)?
-Michel