Not applicable

QlikView Data Architectures

Hi,

With reference to this doc - http://community.qlik.com/docs/DOC-1952

Has anyone tried an N-tier data architecture in your developments, or started a QlikMart?

I'm trying to collect some info and ideas on how you are building your data architectures or ETL: concerns, best practices, etc.

Any thoughts are welcome.

Thanks.

13 Replies
Not applicable
Author

Hi Nick,

We've mostly used a 2-tiered QVD architecture for our bigger deployments. It's very useful for data reuse (when you have sets of data, e.g. sales data, used in more than one application).

I suppose 3-tiered architectures make sense if you have very large data sets and want to increase reload performance or minimize the load on the operational systems.
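
Just to illustrate the idea (the table, field and path names below are only placeholders, so adjust them to your own setup), a minimal two-tier split could look like this:

    // Tier 1 - extract QVW: pull the sales data once and store it as a QVD
    // (assumes an ODBC/OLE DB CONNECT statement earlier in the script)
    Sales:
    LOAD OrderID, CustomerID, Amount, OrderDate;
    SQL SELECT OrderID, CustomerID, Amount, OrderDate FROM dbo.Sales;

    STORE Sales INTO [..\QVD\Sales.qvd] (qvd);
    DROP TABLE Sales;

    // Tier 2 - any user QVW that needs sales data just reuses the QVD
    Sales:
    LOAD * FROM [..\QVD\Sales.qvd] (qvd);

That way several applications share the same Sales.qvd instead of each one hitting the source database.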

--Kennie

Not applicable
Author

Hi Kennie,

Thanks for your feedback.

For ease of maintenance, I'd go for 2-tiered as well.

So to do that, you will have some empty QVWs that just STORE data into QVDs, something like the sketch below.
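
This skeleton is what I have in mind for each generator QVW (the connection name, table and path are just placeholders):

    // "Empty" generator QVW: no layout, script only
    ODBC CONNECT TO [SourceDSN];

    Orders:
    LOAD OrderID, CustomerID, Amount, OrderDate;
    SQL SELECT OrderID, CustomerID, Amount, OrderDate FROM dbo.Orders;

    STORE Orders INTO [..\QVD\Orders.qvd] (qvd);
    DROP TABLE Orders;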

  1. Do you schedule them in Publisher to run one after another, or all of them in parallel to reduce total refresh time?
  2. Do you have each extract in an individual QVW file, or in different tabs of one QVW? Can they still be loaded in parallel if they are just tabs?

I'm looking for more info about big deployments, such as having all kinds of indicators in one QVW and a lot of QVDs. Could you share some of the experiences you have had?

Thanks.

Not applicable
Author

Hi Nick,

It depends on the way you set it up, really. If all the QVDs are in one tier and you have multi-threaded source systems, then you can run them all in parallel.

We do separate the QVD generator files for each data source for ease of maintenance. That way, if one of them fails it's easier to pinpoint, and you don't have to take down all the data sources while fixing it.

But on one occasion we had the QVD generator files run one after another, since some of the data from one had to be fed into the other for mapping purposes.
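
For example (the field names and paths here are only illustrative), the second generator can only run once the first one has stored its QVD, because it uses that QVD as a mapping table:

    // Generator 2 - depends on Customers.qvd produced by Generator 1
    CustomerMap:
    MAPPING LOAD CustomerID, CustomerName
    FROM [..\QVD\Customers.qvd] (qvd);

    Sales:
    LOAD OrderID, CustomerID,
         ApplyMap('CustomerMap', CustomerID, 'Unknown') AS CustomerName;
    SQL SELECT OrderID, CustomerID FROM dbo.Orders;

    STORE Sales INTO [..\QVD\Sales.qvd] (qvd);
    DROP TABLE Sales;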

Usually, doing them all in parallel makes sense if your reload tasks are only scheduled during off-peak times (since it will use more processing power on the source systems). But if you have to reload during the day, then besides doing incremental reloads, running them one by one will reduce the load on the source systems. Of course, in most cases this will make the total reload time longer, so you have to consider whether you prefer a higher load over a narrower time frame or a lower load over a longer time.
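
In case it helps, the incremental reload we do is roughly the usual insert/update pattern (the cutoff value, field names and QVD path are placeholders; in practice the cutoff comes from the previous run):

    // Placeholder cutoff - normally taken from the existing QVD or a log table
    LET vLastReload = '2011-01-01 00:00:00';

    // Pull only the rows changed since the last reload
    Orders:
    LOAD OrderID, Amount, ModifiedDate;
    SQL SELECT OrderID, Amount, ModifiedDate
    FROM dbo.Orders
    WHERE ModifiedDate >= '$(vLastReload)';

    // Add back the unchanged history from the previous QVD
    CONCATENATE (Orders)
    LOAD OrderID, Amount, ModifiedDate
    FROM [..\QVD\Orders.qvd] (qvd)
    WHERE NOT Exists(OrderID);

    STORE Orders INTO [..\QVD\Orders.qvd] (qvd);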

--Kennie

johnw
Champion III

We use the 2-Tiered QVD Architecture for the most part.  I have more recently started moving towards a 3-Tiered QVD Architecture in some cases, though my "Transformation Layer" will also read databases directly as required.  I've not seen the need to separate the two tiers fully, with the transformation layer reading ONLY from QVDs.  Our user QVWs, though, typically read only from QVDs, though not in all cases.
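
As a rough sketch of what one of those transformation layer scripts looks like (the table, field and path names are made up for illustration):

    // Connection used for the direct database reads
    ODBC CONNECT TO [SourceDSN];

    // Start from the raw QVD produced by the extract layer
    Orders:
    LOAD OrderID, CustomerID, Amount, OrderDate
    FROM [..\QVD\Orders_raw.qvd] (qvd);

    // Enrich directly from the database where no QVD exists yet
    LEFT JOIN (Orders)
    LOAD CustomerID, Region;
    SQL SELECT CustomerID, Region FROM dbo.Customers;

    // Store the transformed result for the user QVWs
    STORE Orders INTO [..\QVD\Orders_transformed.qvd] (qvd);
    DROP TABLE Orders;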

While I don't use it, I'm a fan of the 3-Tiered Mixed Architecture for more complicated deployments than our own, or where you have power users that want to make their own QVWs, but don't want to be bothered with script details.

For the 2-Tiered QVD Architecture, we run some of the extract layer QVWs in parallel and some in sequence, whatever makes the most sense given system load and other factors.  We also often create multiple QVDs from a single QVW, typically dimension tables closely associated with a main fact table, but not so closely that we want to join all of the tables together.
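
As an example of the multiple-QVDs-per-QVW approach (placeholder names again), one extract QVW might store the fact table and a couple of closely related dimensions:

    Orders:
    LOAD OrderID, CustomerID, ProductID, Amount;
    SQL SELECT OrderID, CustomerID, ProductID, Amount FROM dbo.Orders;

    Products:
    LOAD ProductID, ProductName, ProductGroup;
    SQL SELECT ProductID, ProductName, ProductGroup FROM dbo.Products;

    STORE Orders INTO [..\QVD\Orders.qvd] (qvd);
    STORE Products INTO [..\QVD\Products.qvd] (qvd);
    DROP TABLES Orders, Products;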

I don't know how big our deployment is in comparison to others, but I'd guess we have 100-200 QVDs, and it looks like we have 139 distinct QVWs that are in current use.

Not applicable
Author

Hi,

I tried to open the document you referred to, but it denies me access. Can any kind soul attach a copy here?

Regards,

Xue Bin

Not applicable
Author

Attached...

Not applicable
Author

Thank you, Nick :)

Not applicable
Author

Hi John:

How do you handle large QVWs? We currently have a very straightforward setup with a QVD generator and a QVW to pull everything together. The end result is a 2 GB QVW. The reload takes less than 3 minutes, but the distribution takes an hour.

How can we speed this up? Is there some setting I need to change to increase the buffer size when Publisher copies the QVW to the UserDocument folder?

We are a small company and we are only distributing it to all "authenticated users", so only one copy of the file is generated, at 2 GB.

Regards

-Mike

johnw
Champion III

Our biggest QVW is about 600 MB, and I can see that the last time it ran, it took about four and a half minutes to distribute.  As with your case, that particular file has only a single distributed copy.  Scale that up, and I'd expect about 15 minutes for a 2 GB document on our hardware.

As an experiment, I tried copying the file back to my PC.  It took about one and a half minutes.  I'm not sure where the other three minutes are going, honestly.  There's a little more to it than just a copy, but I wouldn't think three minutes' worth.

Anyway, I'm unfortunately no QlikView Publisher or server expert.  I don't know what you might do to speed it up because I don't know what's slowing it down.  Since I know very little about Publisher, I'd probably personally start by taking a look at what's going on on the hardware at the time of distribution.  But you may be well ahead of me in terms of knowing what to look for.