markperrone
Contributor III

Cross application communication (share bookmarks)

We have a large 120M-row app that we need to break up into an analytics app and a detail app.  We load the detail app as usual, and in the analytics app we use a binary load from the detail QVF and then drop the unneeded fields.  Now we need to create a bookmark on the analytics side and have a way for the detail app to use the same bookmark, or at least copy it.  The analytics fields are a proper subset of the detail app's fields.

We tried ODAG last year, but its interface was very cumbersome and (sorry, but) the scripting requirements were even worse.  The approach described above is very fast and requires minimal scripting.

We also tried document chaining by passing the current selections in the detail app URL, but that proved unreliable: it worked only sometimes.

We're investigating the APIs but not having much luck.  I thought there would be something like:

detailApp = qlik.app(detailGuid)

detailApp.bookmark.create(name, this.currentSelections, ignoreLayout)

These are client-managed apps.  We are not looking for a cloud solution.
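For what it's worth, the Capability API on the mashup side has building blocks close to this, though I haven't verified it end to end. Here is a rough sketch, assuming both apps are opened in the same mashup and the shared field names are known. The GUIDs, config object, and field names are placeholders, and note that qSelectedFieldSelectionInfo is truncated by the engine to the first few selected values per field, so this only covers small selections:

```javascript
// Pure helper: turn a qSelectionObject layout into { fieldName: [values] }.
// NOTE (assumption): qSelectedFieldSelectionInfo only contains the first few
// selected values per field, so large selections are not fully transferred.
function selectionsToFieldMap(selectionObject) {
  var map = {};
  (selectionObject.qSelections || []).forEach(function (sel) {
    map[sel.qField] = (sel.qSelectedFieldSelectionInfo || []).map(function (v) {
      return v.qName;
    });
  });
  return map;
}

// Mashup-side sketch (untested; GUIDs and config are placeholders):
// var analyticsApp = qlik.openApp('ANALYTICS-APP-GUID', config);
// var detailApp = qlik.openApp('DETAIL-APP-GUID', config);
//
// analyticsApp.getList('SelectionObject', function (reply) {
//   var fields = selectionsToFieldMap(reply.qSelectionObject);
//   Object.keys(fields).forEach(function (f) {
//     detailApp.field(f).selectValues(fields[f], false, true);
//   });
//   detailApp.bookmark.create('Copied from analytics', 'auto-created bookmark');
// });
```

field().selectValues() and bookmark.create() are real Capability API calls, but the wiring above (opening two apps side by side and mirroring selections) is the part that would need testing against a live server.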


5 Replies
marcus_sommer

I would imagine that document chaining is the most promising approach, especially with regard to effort. If it wasn't reliable enough in your testing, that might be caused by insufficient synchronization of the logic and usability between the applications: not using identical fields; differing (calculation) conditions, variables, or actions; not transferring all selected fields; selecting non-existent or invalid field values; or having a data structure in which the order of the selections affects the final selection state.

Most of the possible causes mentioned above need to be considered regardless of which method is used to sync the applications. This means the task is less simple than it appears, especially if the applications weren't designed for it from the beginning. Nevertheless, it may be a simplification to transfer bookmarks rather than (only) single selections; here is a starting point: Solved: Calling the Bookmarks in the qliksense with a url - Qlik Community - 941640
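For reference, the linked thread builds on the Single Integration API, whose URL parameters can apply a bookmark or field selections when the detail app is opened. The general shape (server name, GUIDs, IDs, and the field/values are placeholders) looks like:

```
https://<server>/single/?appid=<detail-app-GUID>&sheet=<sheet-id>&bookmark=<bookmark-id>
https://<server>/single/?appid=<detail-app-GUID>&sheet=<sheet-id>&select=Region,EMEA,APAC
```

This is the same "single configurator" mechanism, so it inherits whatever reliability issues were seen with the plain URL method.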

Besides this, it might be more expedient to skip the whole approach and keep everything within a single application, because the detail app quite probably contains more than 90% of the combined data set, and the extra data may not make a significant difference from a performance point of view.

markperrone
Contributor III
Author

Thanks for the response and the suggestions.  We investigated the single configurator option, but the results were similar to the URL method: it worked only sometimes, and I read recently that the single configurator will be deprecated in a future release (we're a few releases behind current).  I'm a bit perplexed that Qlik would implement this useful technique for Qlik Cloud and QlikView but leave it out of Qlik Sense Enterprise.  The analytics data model is a subset of the detail data model, so it's not a question of mismatched data models.  When everything was in a single app, performance was quite bad.

marcus_sommer

If I interpret your statement correctly, the dashboard and detail applications each perform sufficiently well on their own, but not as a combined data set within a single application. That hints at an unsuitable data model in which one data set isn't a true subset of the other, but rather two data sets linked in some way.

Besides possible impacts on performance, this may also lead to side effects when synchronizing the selection state. Therefore a check of, and some adjustments to, the data model might be more expedient. The idea behind the Fact Table with Mixed Granularity - Qlik Community - 1468238 may be helpful in this regard.

markperrone
Contributor III
Author

The analytics app is built from the detail app's QVF; we then keep only the fields needed for calculations and filtering.  There is a set of about 8 fields that are common (same name, content, and formatting) between the two apps.  Like you said, each side performs better separately than when it was all one large app.  The problem lies in getting the two apps to talk to each other.  From the community, the accepted solution seemed to be passing filters in the URL, but as I mentioned, the results were inconsistent.  Feedback is appreciated, but I think we'll have to chalk this up to "not supported at present".  I've seen a few asks in the ideation group for this feature.  Considering cloud has this implemented, I would think it would not be difficult with Sense.

marcus_sommer

I think it means that both apps have the same starting point but are quite different apps with only a little overlap. That isn't necessarily a show-stopper for document chaining, but it could make the communication more complicated, and it's IMO not the most suitable setup for implementing such logic.

Personally, I would start with a single application that contains all data at the lowest needed granularity and covers all requirements of the views. Very likely it would be based on a star-schema data model that includes all essential logic, so that for nearly everything simple expressions like sum() and count() are enough, without needing synthetic dimensions, aggr(), or (nested) if-loops within the expressions. Using a reduced, fixed, and curated data set could also speed up development.

Once this is mostly finished, it makes sense to check the performance and optimize here and there; only if the performance can't be sufficiently improved would it be sensible to look at further measures like document chaining. To split the task into two applications, I would start by keeping the entire logic and just removing fields and/or aggregating data to a higher level. That goes more in the direction of having one application in two versions rather than creating two different ones.

Besides the above, I suggest using only native selections in the UI: no magic with actions (regardless of the triggers used) or with alternate states that affect the selections. Also, all used expressions and variables must match each other as well as the underlying data set. For example, two apps may share the same variable containing max(Date), but if its result differs between the data sets, every calculation that refers to it will differ too. Ensuring such requirements is not always easy, especially across different applications. As already hinted, this applies not only to document chaining but to all other approaches: they need to be developed as a unit.

A further important point is that the transferred selections cannot directly mirror a selection state; they are multiple selections applied in a more or less arbitrary order, so if the various selections overlap each other they may produce an inconsistent result. A workaround might be not to transfer all the selections, but instead to transfer the values of another field derived from them as the selection values for the target. This could mean that not the selected periods + channel + categories + customers are transferred, but rather the possible order IDs for that selection.
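That last idea could be sketched with the Capability API: read the possible order IDs under the current selection from a field object in the analytics app, then transfer those values instead of the original selections. This is untested; the app GUID, the field name OrderID, the 10000-row cap, and the row shape are assumptions based on the Field API documentation:

```javascript
// Pure helper: extract the text values from Capability API field data rows.
// Row shape ({ qText: ... }) is assumed from the Field API documentation.
function rowTexts(rows) {
  return (rows || []).map(function (r) { return r.qText; });
}

// Sketch (untested; GUID and field name are placeholders):
// var analyticsApp = qlik.openApp('ANALYTICS-APP-GUID', config);
// var orderField = analyticsApp.field('OrderID');
// orderField.getData({ rows: 10000 }); // values associated with the current selection
// orderField.OnData.bind(function () {
//   var orderIds = rowTexts(orderField.rows);
//   // Transfer orderIds to the detail app, e.g. via a single-integration
//   // select= parameter or detailApp.field('OrderID').selectValues(orderIds).
// });
```

The row cap matters here: a selection that resolves to more order IDs than getData() fetches would be transferred incompletely, so this suits moderately sized result sets.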