eiconsulting
Partner - Creator

How does QV process the SQL flows?

I was writing a script to import a 3.5M-record table. After about 800k records the system gives a memory error (I monitored the decrease in free memory, and it always happens at the same "level").

If I limit the SQL query to 800k rows and then start again with the following 800k, I can load it all.

When I save, the file takes much, much less space.

Can anybody point me to some reading that explains why this happens? Nothing too difficult ... 😉

Obviously, if I move to my other computer with more memory and Vista 64 the limits will change, but not the "process", which is what I'd like to understand.

Flavio

Federico Sason | Emanuele Briscolini
10 Replies
Not applicable

QlikView works directly in RAM. It keeps the data within the application itself, so whenever you open a QlikView application the complete application, with all its data, resides in RAM. Capacity therefore depends on the PC's RAM.

eiconsulting
Partner - Creator
Author

Thanks, actually this is more "fun" than it appears... monitoring the actual memory use, I saw there is a peak while the SQL SELECT is processed; then the system goes back to a much lower level and is able to process a new "lump" just as it did the first. So all the records of the table (and more) fit in memory at the end of the script, but while the SELECT is being prepared there is a limit. The file that is saved to disk is smaller still.

I infer that a processing buffer is needed and is discarded immediately afterwards, so that the memory used for processing each LOAD is unrelated to the memory needed to host the whole data set (i.e. the memory needed for 10 LOADs of 300k records each does not equal that of 3M in one load). Will keep you posted; this looks very intriguing.
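The per-chunk buffering inferred above can be sketched generically (plain Python, not QlikView; the names are illustrative): when rows are processed one chunk at a time, the peak transient buffer is a single chunk, regardless of the total row count.

```python
def fetch_in_chunks(rows, chunk_size):
    """Yield the row set one chunk at a time, so only one chunk's
    buffer is alive at any moment (the transient memory peak)."""
    for i in range(0, len(rows), chunk_size):
        yield rows[i:i + chunk_size]

loaded = 0
for chunk in fetch_in_chunks(list(range(3_500_000)), 800_000):
    loaded += len(chunk)  # process, then the chunk buffer is discarded
print(loaded)  # 3500000
```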

Flavio

P.S. Using the following trick I loaded it all:

from tbldocrigdocumenti001 limit 800000;
from tbldocrigdocumenti001 offset 800001 limit 800000;
from tbldocrigdocumenti001 offset 1600001 limit 800000; etc.
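A small Python sketch that generates such chunked statements (the table name is from the post; the column list is elided there, so `select *` stands in). Note that the offsets in the trick above, 800001 and 1600001, each skip one row after the previous chunk; exact multiples of the chunk size (800000, 1600000, ...) would cover every row.

```python
def chunked_selects(table, chunk_size, total_rows):
    """Build LIMIT/OFFSET statements that cover total_rows with no gaps."""
    statements = []
    for offset in range(0, total_rows, chunk_size):
        prefix = f"select * from {table}"
        if offset:
            prefix += f" offset {offset}"
        statements.append(f"{prefix} limit {chunk_size};")
    return statements

for stmt in chunked_selects("tbldocrigdocumenti001", 800_000, 3_500_000):
    print(stmt)
```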



rwunderlich
Partner Ambassador/MVP

This is a very intriguing question. However, I can't duplicate your results. What DBMS and driver version are you using? What's the amount of memory difference you are seeing?

LOAD 0 as zero;
SELECT * FROM mytable;

This is a pretty artificial test, so it may not be that useful. I wasn't using a DBMS that provides offset and limit, but I simulated the process by using predicates to select the data. Is there anything special in your preceding load?

-Rob

prieper
Master II

Hi,

which QV-version are you using? We had similar problems, but they disappeared in the latest QV-version (8261).

eiconsulting
Partner - Creator
Author

The version is 8.50.6231.5 ... I hurried to upgrade, but 8261 is not there. Am I missing something?

Available Updates:

QlikView 8.50 (build 6261)

QlikView 8.50 x64 (build 6261)

Flavio

eiconsulting
Partner - Creator
Author

I also thought it very intriguing.

I am using Postgres with a PostgreSQL driver on a 32-bit PC.

I only use SELECT, and it is the first one... I prepared a 300k Word file with screenshots to attach, but uploading takes forever and then gets stuck. I think the screenshots were particularly useful to describe what happens; while I find a way to upload them, you'll find the script below.

ODBC CONNECT TO PostgresFStest (XUserId is OCMECYdNTDdOGZVMTbFA);

select
(idazienda || '|' || idnatdocumento || '|' || idtipdocumento || '|' || escontabile || '|' || iddocumento) as pk_tDoc,
(idazienda || '|' || idmarchio || '|' || codriga) as pk_art,
idtipriga,
cprownum,
codriga,
descrizione as descr_rDoc,
quantita as qta_rDoc,
prunitario,
persconto1,
persconto2,
persconto3,
idmarchio as marchio_rDoc,
cosvenduto
from xxxDETAILS limit 800000;

select
(idazienda || '|' || idnatdocumento || '|' || idtipdocumento || '|' || escontabile || '|' || iddocumento) as pk_tDoc,
(idazienda || '|' || idmarchio || '|' || codriga) as pk_art,
idtipriga,
cprownum,
codriga,
descrizione as descr_rDoc,
quantita as qta_rDoc,
prunitario,
persconto1,
persconto2,
persconto3,
idmarchio as marchio_rDoc,
cosvenduto
from xxxDETAILS offset 800001 limit 800000;

select
(idazienda || '|' || idnatdocumento || '|' || idtipdocumento || '|' || escontabile || '|' || iddocumento) as pk_tDoc,
(idazienda || '|' || idmarchio || '|' || codriga) as pk_art,
idtipriga,
cprownum,
codriga,
descrizione as descr_rDoc,
quantita as qta_rDoc,
prunitario,
persconto1,
persconto2,
persconto3,
idmarchio as marchio_rDoc,
cosvenduto
from xxxDETAILS offset 1600001 limit 800000;

select
(idazienda || '|' || idnatdocumento || '|' || idtipdocumento || '|' || escontabile || '|' || iddocumento) as pk_tDoc,
(idazienda || '|' || idmarchio || '|' || codriga) as pk_art,
idtipriga,
cprownum,
codriga,
descrizione as descr_rDoc,
quantita as qta_rDoc,
prunitario,
persconto1,
persconto2,
persconto3,
idmarchio as marchio_rDoc,
cosvenduto
from xxxDETAILS offset 2400001 limit 800000;

select
(idazienda || '|' || idnatdocumento || '|' || idtipdocumento || '|' || escontabile || '|' || iddocumento) as pk_tDoc,
(idazienda || '|' || idmarchio || '|' || codriga) as pk_art,
idtipriga,
cprownum,
codriga,
descrizione as descr_rDoc,
quantita as qta_rDoc,
prunitario,
persconto1,
persconto2,
persconto3,
idmarchio as marchio_rDoc,
cosvenduto
from xxxDETAILS offset 3200001 limit 800000;

select
(idazienda || '|' || idnatdocumento || '|' || idtipdocumento || '|' || escontabile || '|' || iddocumento) as pk_tDoc,
datdocumento as data_tDoc,
escontemissione,
idsoggetto as soggetto_tDoc,
idsede as sede_tDoc,
idmagazzino as mag_tDoc,
codtelaio as telaio_tDoc,
targa as targa_tDoc,
idcanvendita as canven_tDoc,
totimponibile as totimp_tDoc,
totdocumento as totdoc_tDoc,
idtipcontratto
from xxxHEADER
where flemesso = 1 AND flannullato <> 1;

prieper
Master II

sorry for the typo: should read 6261, not 8261

/Peter

rwunderlich
Partner Ambassador/MVP

Flavio,

I note one significant difference between your script and my testing, and the way I normally write a script: I always use a preceding load. Perhaps with a preceding load the fields get some QV storage optimizations as the table is built. I don't have time to test this at the moment, but it should be easy for you to try.

Your minimal preceeding load would look like this:

LOAD * ;
SELECT ...

I also see you are creating some string compound keys. While not related to your initial question, you can significantly reduce the memory requirements of the runtime app (and maybe of the load) by using numeric keys. Create the keys in the preceding load using autonumber:

autonumber(idazienda || '|' || idnatdocumento || '|' || idtipdocumento || '|' || escontabile || '|' || iddocumento, 'key1') as pk_tDoc
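As a rough illustration of what autonumber does (a Python analogue, not QlikView code): each distinct compound-key string is replaced by a small consecutive integer, so the long string need not be stored per row.

```python
class AutoNumber:
    """Python analogue of QlikView's autonumber(): map each distinct
    value to a consecutive integer, reusing the integer on repeats."""
    def __init__(self):
        self._ids = {}

    def __call__(self, value):
        # setdefault assigns the next integer only for unseen values.
        return self._ids.setdefault(value, len(self._ids) + 1)

key = AutoNumber()
print(key("A1|N1|T1|E1|D1"))  # 1
print(key("A1|N1|T1|E1|D2"))  # 2
print(key("A1|N1|T1|E1|D1"))  # 1 (same key, same number)
```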

If you get a chance to try it, let me know if using a preceeding LOAD * changes the memory profile.

-Rob

eiconsulting
Partner - Creator
Author

The LOAD * didn't change the behavior (I had thought there would be an implicit LOAD if you omit it?). I was going to revise this "policy" of mine and "name" the QV tables as I saw in the demo files, also so I can use them for JOINs.

The new version doesn't change anything either (I tried that separately).

About the keys, that's a very useful hint. I kept them readable, adding the pipe sign in between, to understand how the tables connect. I will surely change them... and read the documentation to understand what autonumber really does.

The mystery lives on ... I tried again to attach the screenshot but had to give up. I have a big demo at an exhibition and will make do with the split SELECT; as soon as I am finished I have to solve this problem.

Thank you, Rob

Flavio
