[Info-ingres] When is a table just too damn big?

Martin Bowes martin.bowes at ndph.ox.ac.uk
Wed Nov 7 15:39:59 UTC 2018


And it just got weirder...

On Ingres 11 I made a data file with 2172364081 rows and tried to copy them into a table partitioned 32 ways, as I mentioned before...

copy pidrun(
        run_id      = c0tab,
        encoded_id  = c0tab,
        group_id    = c0tab,
        status      = c0nl
) from '/dbbackup/IT/ingres/pidrun.dat'
with row_estimate = 2172364081
Executing . . .

E_SC0206 An internal error prevents further processing of this query. 
    Associated error messages which provide more detailed information about
    the problem can be found in the error log, II_LOG:errlog.log
    (Wed Nov  7 15:32:02 2018)

And in the backend...
DBING1_NDPH_OX_AC_::[35028             , 16098     ,  00007f190ffd03c0, cutinterf.c:1645      ]: Wed Nov  7 15:31:57 2018 E_CU0110_TO_MANY_CELLS    A request was passed to CUT to read/write 1017980944 cells from buffer *** ERslookup() ERROR: Missing or bad parameter for this message. ***
DBING1_NDPH_OX_AC_::[35028             , 16098     ,  00007f190f9eb3c0, qeucopy.c:2487        ]: ULE_FORMAT: qeferror.c:1000  Couldn't look up message 130110 (reason: ER error 10903)
E_CL0903_ER_BADPARAM    Bad parameter
DBING1_NDPH_OX_AC_::[35028             , 16098     ,  00007f190f9eb3c0, qeucopy.c:2487        ]: Wed Nov  7 15:32:02 2018 E_QE009C_UNKNOWN_ERROR    Unexpected error received from another facility.  Check the server error log.
DBING1_NDPH_OX_AC_::[35028             , 16098     ,  00007f190f9eb3c0, scsqncr.c:14225       ]: Wed Nov  7 15:32:02 2018 E_SC0216_QEF_ERROR    Error returned by QEF.
DBING1_NDPH_OX_AC_::[35028             , 16098     ,  00007f190f9eb3c0, scsqncr.c:14226       ]: Wed Nov  7 15:32:02 2018 E_SC0206_CANNOT_PROCESS   An internal error prevents further processing of this query.
 Associated error messages which provide more detailed information about the problem can be found in the error log, II_LOG:errlog.log
DBING1_NDPH_OX_AC_::[35028             , 16098     ,  00007f190f9eb3c0, scsqncr.c:14226       ]: PQuery: copy pidrun( run_id = c0tab, encoded_id = c0tab, group_id = c0tab, status = c0nl ) from '/dbbackup/IT/ingres/pidrun.dat' with row_estimate = 2172364081
DBING1_NDPH_OX_AC_::[35028             , 16098     ,  00007f190f9eb3c0, scsqncr.c:14226       ]: Query:  copy pidrun( run_id = c0tab, encoded_id = c0tab, group_id = c0tab, status = c0nl ) from '/dbbackup/IT/ingres/pidrun.dat' with row_estimate = 2172364081
DBING1_NDPH_OX_AC_::[35028             , 16098     ,  00007f190f9eb3c0, scsqncr.c:14226       ]: LQuery: Execute qrtxthlp22                     
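
For what it's worth, the row count is just past the signed 32-bit limit:
2172364081 > 2147483647 (2^31 - 1). So my guess is that something in the
copy path is still counting rows or cells in an i4 and overflowing. If
that's right, splitting the data file and copying it in chunks that each
stay under the limit might dodge it. A sketch only (the part file names
and the even split are made up; I'd split pidrun.dat beforehand):

/* Hypothetical sketch: pidrun_part1/2.dat are assumed halves of
** pidrun.dat, so each copy stays well under 2**31 rows. */
copy pidrun(
        run_id      = c0tab,
        encoded_id  = c0tab,
        group_id    = c0tab,
        status      = c0nl
) from '/dbbackup/IT/ingres/pidrun_part1.dat'
with row_estimate = 1086182041
\g
copy pidrun(
        run_id      = c0tab,
        encoded_id  = c0tab,
        group_id    = c0tab,
        status      = c0nl
) from '/dbbackup/IT/ingres/pidrun_part2.dat'
with row_estimate = 1086182040
\g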

Marty

-----Original Message-----
From: Karl and Betty Schendel [mailto:schendel at kbcomputer.com] 
Sent: 05 November 2018 15:36
To: info-ingres at lists.planetingres.org
Subject: Re: [Info-ingres] When is a table just too damn big?


> On Nov 5, 2018, at 10:20 AM, Martin Bowes <martin.bowes at ndph.ox.ac.uk> wrote:
> 
> Hi Karl,
> 
>> That's a weird message. 
> Yep.
> 
>> Do you need the table key to be btree?  Can it be hash? Hash partitioning doesn't necessarily play well with a btree index.
> We are doing lots of inserts into it and it's a big table, so I suspect a hash structure would be nasty.

Lots of inserts into a hash isn't necessarily a bad thing, especially when the table
is large.  I did a fair amount of investigation on this subject back when Netsol ran
the DNS registry on Ingres.  As long as the keys hash reasonably well, the overflow
chains average out to something like 1.1 pages long for short records and
weekly modifications.
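
If you want to experiment, a minimal sketch (guessing at the key columns
and page count for your pidrun; minpages preallocates the hash buckets so
a big load doesn't start life in overflow, and 50 is the usual hash
fillfactor):

/* Sketch only: key columns and minpages are guesses, not a recommendation. */
modify pidrun to hash on run_id, encoded_id
with minpages = 500000, fillfactor = 50
\g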

Karl
