[Info-ingres] Effective Ingres Caching
Václav Dohnal
vdohnal at uniscomp.cz
Fri Jan 5 08:13:01 UTC 2018
Hi,
it is true that increasing cache memory stops helping beyond a certain
point. In our case we reduced the disk load by tuning qef_sort_mem and
qef_hash_mem, and by adjusting dmf_group_count, as Karl mentioned.
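
For illustration only, the change was roughly of the following shape. This
is a sketch, not our production settings: the resource names and numbers
below are examples, so confirm the exact names for your installation with
iigetres (or in CBF) and back up config.dat before touching anything.

  # back up the configuration before changing anything
  cp $II_SYSTEM/ingres/files/config.dat $II_SYSTEM/ingres/files/config.dat.bak

  # inspect the current values (use the node name and server class
  # exactly as they appear in your own config.dat)
  iigetres "ii.$(hostname).dbms.*.qef_sort_mem"
  iigetres "ii.$(hostname).dbms.*.qef_hash_mem"
  iigetres "ii.$(hostname).dbms.*.dmf_group_count"

  # raise them (example values only), then restart the DBMS server
  # so the new settings are picked up
  iisetres "ii.$(hostname).dbms.*.qef_sort_mem" 67108864
  iisetres "ii.$(hostname).dbms.*.qef_hash_mem" 134217728
  iisetres "ii.$(hostname).dbms.*.dmf_group_count" 512

Change one parameter at a time and watch the disk statistics; whether
larger sort/hash memory helps depends entirely on the workload.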
Vasek
On 04.01.2018 at 23:03, Michel Forget wrote:
> Hi,
>
> I'm looking for some advice on cache sizes in Ingres.
>
> The following is the output of "top" for my current Ingres installation:
>
> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
> 4601 ingres 20 0 7430084 2.144g 33984 S 431.3 7.8 1985:18 iidbms
> If I use pmap to look at the memory usage of the process ("pmap
> -x 4601"), it seems Ingres is using 7430084 KiB of memory, which
> is roughly 7 GiB (7430084 / 1024 / 1024 ≈ 7.1).
>
> I have a lot more than 8GB of RAM, though. I've got 32GB, and I'd
> like to use a much larger portion of the available memory for
> logging. I could pretty easily field 96GB if there was a reason to.
>
> I currently have the cache guideline for each of the caches set to
> "huge", which is the largest of the guidelines.
>
> 2k:
> │cache_guideline │huge │string │
> │dmf_group_size │8 │data pages │
> │dmf_write_behind │ON │boolean │
>
> 4k:
> │cache_guideline │huge │ │
> │dmf_group_size │8 │data pages │
> │dmf_separate │OFF │boolean │
> │dmf_write_behind │ON │boolean │
>
> 8k:
> │cache_guideline │huge │ │
> │dmf_group_size │8 │data pages │
> │dmf_separate │OFF │boolean │
> │dmf_write_behind │ON │boolean │
>
> 16k:
> │cache_guideline │huge │ │
> │dmf_group_size │8 │data pages │
> │dmf_separate │OFF │boolean │
> │dmf_write_behind │ON │boolean │
>
> 32k:
> │cache_guideline │huge │ │
> │dmf_group_size │8 │data pages │
> │dmf_separate │OFF │boolean │
> │dmf_write_behind │ON │boolean │
>
> 64k:
> │cache_guideline │huge │ │
> │dmf_group_size │8 │data pages │
> │dmf_separate │OFF │boolean │
> │dmf_write_behind │ON │boolean │
> It -seems- like I should be able to go into the derived parameter set
> for each configuration section and change the dmf_cache_size parameter
> to specify a larger number of data pages.
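
For reference, that kind of change can also be scripted outside of CBF. A
sketch only: the resource name below is illustrative, because the exact
path differs per page size and per shared/private cache, so copy the real
name out of config.dat first.

  # find the exact derived-parameter names for the caches you want to grow
  grep dmf_cache_size $II_SYSTEM/ingres/files/config.dat

  # then set the one you found; the name and value here are examples only
  # (the unit is pages of that cache's page size), and the DBMS server
  # needs a restart afterwards
  iisetres "ii.$(hostname).dbms.private.*.p8k.dmf_cache_size" 262144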
>
> I actually tried this some time ago, and while the server did use more
> memory, the net effect I observed on performance was actually negative.
> I'm not sure whether my experience was atypical or whether there really
> is a reason you shouldn't just use as much memory as you can (i.e.
> memory that isn't being used by other services on the server).
>
> So, some specific questions I am hoping to find answers to:
>
> 1. Is increasing dmf_cache_size the correct approach toward the goal
> of consuming more memory and increasing overall read performance by
> not having to hit the physical disk?
> 2. Are there negative consequences to setting this value too high?
> 3. I've noticed that the output of "help table sometable;" shows a
> field called "cache priority". If I know which of my tables I
> consider to be most cache-worthy, can I use this setting to purposely
> bias the cache in favor of the tables I care about? If so, how can I
> do that? I've never seen anything that indicates how I could set this
> (see the sketch after this list).
> 4. I realize this is super-general, but what do people generally
> consider "good" cache hit rates? I mean, if I keep throwing more
> memory at the cache can I reasonably expect to drive cache hit rates
> up to values approaching 100% (ie: if the cache were larger than the
> actual database itself)?
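
On question 3: my understanding (please verify against the MODIFY
statement documentation for your release) is that the cache priority shown
by "help table" can be set per table with the PRIORITY option of MODIFY or
CREATE TABLE. The table, column and priority value below are made-up
examples:

  sql mydb
  * -- rebuild the table's existing storage structure, adding a cache
  * -- priority; a higher value should keep its pages cached longer
  * modify hot_lookup to btree unique on id with priority = 8;
  * \g
  * -- "help table hot_lookup;" should now show the new cache priority
  * help table hot_lookup;
  * \g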
>
> -Michel
>
>
>
--
Václav Dohnal
systems engineer
UNIS COMPUTERS, a.s.
Jundrovská 618/31, 624 00 Brno, Czech Republic
mobile: +420 606 730 574
tel.: +420 544 528 334
fax: +420 541 223 134
e-mail: vdohnal at uniscomp.cz
www.uniscomp.cz <https://www.uniscomp.cz>