ERROR 11/19/12 02:26:09.348 DGRAPH {dgraph,update} Record change references unknown dimension value id 4294952004. [rec_spec='p_record_id:1202']
This error usually indicates a failure in an earlier partial-update operation that attempted to define a new autogenerated dimension value (with ID 4294952004, in the example). Because of that failure, the dimension value ID, which is defined in the Forge autogen state file, is never defined in the MDEX Engine to which the partial update is applied.
The upshot of this earlier failure is that the MDEX Engine has no definition for the associated dimension value ID; as a result, subsequent record-update operations which reference this ID, such as adds or updates of records tagged with the new dimension value, will also fail with the above error.
Forge will not re-send the failed dimension-value definition in later updates: it has no way of knowing which operations failed when they were applied to the MDEX Engine, and because the ID is already present in the Forge autogen state file, Forge assumes that the value is already defined and can be relied upon for further record updates.
Re-applying just the dimension-value definition operation (by copying the update file back into the MDEX Engine's updates directory and issuing an admin?op=update command) is not normally practical either: the MDEX Engine will not apply further updates from a file that has already been read in once. A baseline update is therefore generally the best approach for ensuring that all dimension value IDs defined in the Forge state are also defined in the MDEX Engine.
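For reference, the admin?op=update operation mentioned above is simply an HTTP GET against the Dgraph's port and can be issued from any HTTP client. The sketch below is a minimal illustration with a hypothetical host and port; as noted, it will not re-apply a file the MDEX Engine has already read, so it does not by itself fix the missing dimension value.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class TriggerMdexUpdate {
    public static void main(String[] args) throws Exception {
        // Hypothetical Dgraph host and port; adjust to your environment.
        URL url = new URL("http://localhost:15000/admin?op=update");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");

        // Print whatever the MDEX Engine returns for the admin operation.
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream()))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);
            }
        }
        conn.disconnect();
    }
}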
Troubleshooting large MDEX Engine response sizes
While troubleshooting a performance issue with Endeca, your Cheetah analysis might indicate that the worst queries all have large response sizes ("large" is a relative term, but as a best practice an MDEX Engine response should be no larger than approximately 200-300 KB). The following Dgraph request log entry shows a large response size:
1173115134 127.0.0.1 13172736 493811.11 463253.40 200 1933 -2 9 /graph?node=33&group=0&offset=0&nbins=25&irversion=480
The third column from the left shows the byte size of the response. In this example, the response is 13172736 bytes, which is just over 13 MB, and the query performs poorly because of its large size.
Note: In MDEX Engine 6.x, the response size is the fifth column.
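If you only need to flag oversized responses, the size column can be scanned directly rather than running a full log analysis. The following is a minimal sketch; the log path, threshold, and column index are assumptions for illustration (use index 4 for the fifth column in 6.x logs, as noted above).

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

/** Scans a Dgraph request log and prints queries whose response size exceeds a threshold. */
public class LargeResponseScanner {

    // Roughly the upper end of the recommended 200-300 KB response size.
    private static final long THRESHOLD_BYTES = 300L * 1024;

    public static void main(String[] args) throws IOException {
        String logPath = args[0];   // path to a Dgraph request log file
        int sizeColumn = 2;         // 0-based index: third column for 5.x logs, 4 for 6.x
        try (BufferedReader reader = new BufferedReader(new FileReader(logPath))) {
            String line;
            while ((line = reader.readLine()) != null) {
                String[] fields = line.trim().split("\\s+");
                if (fields.length <= sizeColumn) {
                    continue;   // skip malformed or empty lines
                }
                try {
                    long bytes = Long.parseLong(fields[sizeColumn]);
                    if (bytes > THRESHOLD_BYTES) {
                        System.out.println(bytes + " bytes: " + line);
                    }
                } catch (NumberFormatException ignored) {
                    // not a data line; skip it
                }
            }
        }
    }
}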
The following troubleshooting questions and resolutions can help you determine why the response size is large:
1. Is "nbins" too high? This value, which appears in the request log entry for a given query, is the maximum number of records that the MDEX Engine will return in the response. In the example above, nbins=25 is reasonable. If a request log entry showed nbins=1000 (for example), that is likely too high and needs to be restricted in the API code via the ENEQuery.setNavNumERecs() method (from the Java API, in this example), combined with the paging control methods; see the first sketch after this list. For information on these paging control methods, see the section "Paging through a record set" in the Endeca Developer's Guide.
2. Is a merchandising rule promoting a large number of records? You can check the result of a query in an Endeca reference application to see the supplement objects returned (which contain the merchandising data) and how many records each one contains; see the second sketch after this list. You can also use the Dgraph --merch_debug flag and verify this information in the Dgraph error log.
If too many records are being returned with a rule, the rule's Style "Max Records" setting likely needs to be lowered so that no more than a set number of records is returned.
3. Are large properties (such as long description fields or a crawled document's Endeca.Document.Text property) being returned in the record list? It is generally not recommended to include such properties in the record list; they should be returned only on the individual record page, since otherwise a potentially large document is returned unnecessarily for every record in the list of results.
To avoid returning large property values in the record list, open your project in Developer Studio and clear the "Show with Record List" checkbox in the Property or Dimension editor for the property or dimension in question. It is also recommended to use snippeting to expose portions of large properties on the record list.
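Below is a minimal sketch of the paging approach from item 1, using the Java Presentation API. The host, port, query string, and page size are assumptions for illustration only; see the "Paging through a record set" section of the Developer's Guide for the full pattern.

import com.endeca.navigation.ENEQuery;
import com.endeca.navigation.ENEQueryResults;
import com.endeca.navigation.HttpENEConnection;
import com.endeca.navigation.UrlENEQuery;

public class PagedRecordList {
    public static void main(String[] args) throws Exception {
        // Hypothetical MDEX Engine host and port, for illustration only.
        HttpENEConnection conn = new HttpENEConnection("localhost", "15000");

        // Build the query from an application URL query string (N=0 is the root navigation state).
        ENEQuery query = new UrlENEQuery("N=0", "UTF-8");

        // Cap the number of records returned per request instead of sending a huge nbins value...
        query.setNavNumERecs(25);

        // ...and page through the record set by advancing the offset on subsequent requests.
        query.setNavERecsOffset(0);

        ENEQueryResults results = conn.query(query);
        long total = results.getNavigation().getTotalNumERecs();
        System.out.println("Showing up to 25 of " + total + " records");
    }
}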
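And for item 2, a hedged sketch of inspecting the supplement objects on the navigation result to see how many records each merchandising rule promotes; it assumes an ENEQueryResults obtained as in the previous sketch.

import com.endeca.navigation.ENEQueryResults;
import com.endeca.navigation.Supplement;
import com.endeca.navigation.SupplementList;

public class MerchRuleInspector {
    /** Prints how many records each merchandising rule (supplement) promotes. */
    public static void printSupplementSizes(ENEQueryResults results) {
        SupplementList supplements = results.getNavigation().getSupplements();
        for (int i = 0; i < supplements.size(); i++) {
            Supplement supplement = (Supplement) supplements.get(i);
            int promoted = supplement.getERecs().size();
            System.out.println("Supplement " + i + " promotes " + promoted + " record(s)");
        }
    }
}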
Getting "Server component 'DgraphA1' failed to start" in logs, but dgraph is started with applied changes
[11.15.13 10:42:25] INFO: Uncompressing Index Files '1' using gzip.
[11.15.13 10:42:25] INFO: [LiveMDEXHostA] Starting shell utility 'inflate_dgraph_input_DgraphA1'.
[11.15.13 10:44:39] INFO: [LiveMDEXHostA] Starting component 'DgraphA1'.
[11.15.13 10:44:39] SEVERE: Server component 'DgraphA1' failed to start. Refer to component logs in /data/endeca/apps//./logs/dgraphs/DgraphA1 on host LiveMDEXHostA. Occurred while executing line 14 of valid BeanShell script: [[
11| log.info("[ Index (Partial Update Aware)] DistributeIndexAndApply: Distributing Index and Updating Authoring LiveDgraphs");
12| LiveDgraphCluster.cleanDirs();
13| LiveDgraphCluster.copyIndexToDgraphServers();
14| LiveDgraphCluster.applyIndex();
15|
16|
17| log.info("[ Index (Partial Update Aware)] DistributeIndexAndApply: Done.");
]]
[11.15.13 10:44:39] INFO: Sending an exception message
[11.15.13 10:44:39] INFO: Mail sent successfully
[11.15.13 10:44:39] SEVERE: Error executing valid BeanShell script. Occurred while executing line 84 of valid BeanShell script: [[
81|
82| } catch (Exception e) {
83| MailerComponent.sendUrgentExceptionMsg("CSCATBE05.perfectfitgroup.local "+appName + " Baseline update script - Failure", e);
84| throw e; //re-throw exception
85| }
86|
87| } else {
]]
[11.15.13 10:44:39] SEVERE: Caught an exception while invoking method 'run' on object 'BaselineUpdate'. Releasing locks.
Caused by java.lang.reflect.InvocationTargetException
sun.reflect.NativeMethodAccessorImpl invoke0 - null
Caused by com.endeca.soleng.eac.toolkit.exception.AppControlException
com.endeca.soleng.eac.toolkit.script.Script runBeanShellScript - Error executing valid BeanShell script.
Caused by com.endeca.soleng.eac.toolkit.exception.AppControlException
com.endeca.soleng.eac.toolkit.script.Script runBeanShellScript - Error executing valid BeanShell script.
Caused by com.endeca.soleng.eac.toolkit.exception.EacComponentControlException
com.endeca.soleng.eac.toolkit.component.ServerComponent startInParallel - Server component 'DgraphA1' failed to start. Refer to component logs in /data/endeca/apps//./logs/dgraphs/DgraphA1 on host LiveMDEXHostA.
[11.15.13 10:44:39] INFO: Released lock 'update_lock'.
Fri Nov 15 10:44:39 EST 2013
[endeca@CSCATBE05 control]$
Please suggest a way to resolve this error.
Hi Ritu,
Before applyIndex, please check that your dgraph files have been gunzipped properly.
Thanks
MS
Hi MS,
If I get an ERROR message in DGRAPH {dgraph,update} like the one you have mentioned above, my concern is how the partial or baseline update script will know that such an update failed. We do not see any exception or error in the logs of the partial or baseline script, but an ERROR is logged in the Dgraph update log.
Is there any way to get a status indicating that the update failed for some reason, without parsing the Dgraph logs?
Please help me get the actual status of the latest update.
Thanks,
Tasneem
Hi Tasneem,
Can you provide your logs so we can identify what's going on with your updates?
Thanks
MS
Hi MS,
My concern is not about the type of error, since I am able to figure that out. The problem is that if the Dgraph logs an error such as ERROR 11/19/12 02:26:09.348 DGRAPH {dgraph,update} (something ... something), then the Endeca partial-update script should fail, because the update has not succeeded and the error is logged by the Dgraph. But when I run ./partial-update.sh it does not report any error, even though the error is logged by the Dgraph. So I want to know: is there a flag or status that can report that the update did not succeed whenever the Dgraph logs an error?
Thanks,
TASNEEM