Oracle Endeca: some common questions and errors
Record change references unknown dimension value id

ERROR 11/19/12 02:26:09.348 DGRAPH {dgraph,update} Record change references unknown dimension value id 4294952004. [rec_spec='p_record_id:1202']

This error usually indicates a failure in an earlier partial-update operation that attempted to define a new autogenerated dimension value (with ID 4294952004 in the example). Because of that failure, this dimension value ID, which is defined in the Forge autogen state file, is never defined in the MDEX Engine to which the partial update is applied.
The upshot of this earlier failure is that the MDEX Engine has no definition for the associated dimension value ID; as a result, subsequent record-update operations which reference this ID, such as adds or updates of records tagged with the new dimension value, will also fail with the above error.
Forge will not include the failed dimension-value definition in later updates: it has no way of knowing which operations failed on application to the MDEX Engine, and because the ID is already present in the Forge autogen state file, Forge believes that the value is already defined and can be relied upon for further record updates.
Re-applying just the dimension-value definition operation (by copying the update file back into the MDEX Engine's updates directory and issuing an admin?op=update command) is not normally practical either: the MDEX Engine will not apply further updates from a file it has already read in once. A baseline update is therefore generally the best approach for ensuring that all dimension value IDs defined in the Forge state are also defined in the MDEX Engine.
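For reference, issuing the admin?op=update command is just an HTTP GET against the Dgraph's admin interface. The sketch below shows roughly what that looks like; the host and port are placeholders for your environment, and, as noted above, this workaround rarely applies because the update file has usually already been consumed, so a baseline update remains the reliable fix.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class TriggerDgraphUpdate {
    public static void main(String[] args) throws Exception {
        // Placeholder Dgraph host and port; substitute your own environment's values.
        URL adminUrl = new URL("http://localhost:8000/admin?op=update");
        HttpURLConnection conn = (HttpURLConnection) adminUrl.openConnection();
        conn.setRequestMethod("GET");

        // The Dgraph answers with a short status page describing the update attempt.
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream()))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);
            }
        }
    }
}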
Troubleshooting large MDEX Engine response sizes
While troubleshooting a performance issue with Endeca, your Cheetah analysis might indicate that the worst queries all have large response sizes ("large" is a relative term, but as a best practice an MDEX Engine response should be no larger than approximately 200-300 KB). The following Dgraph request log entry shows a query with a large response size:
1173115134 127.0.0.1 13172736 493811.11 463253.40 200 1933 -2 9 /graph?node=33&group=0&offset=0&nbins=25&irversion=480
The third column from the left shows the byte size of the response. In this example, the response is 13172736 bytes, just over 13 MB, which is large enough to cause poor performance.
Note: In MDEX Engine 6.x, the response size is the fifth column.
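As a quick way to find oversized responses in a request log, a small scan like the following can be used. This is only a sketch: the column index and the roughly 300 KB threshold come from the notes above, and the log file path is passed as a command-line argument.

import java.io.BufferedReader;
import java.io.FileReader;

public class LargeResponseScan {
    public static void main(String[] args) throws Exception {
        int sizeColumn = 2;                // 3rd whitespace-separated column; use 4 for 6.x logs
        long thresholdBytes = 300L * 1024; // ~300 KB, the rough upper bound suggested above

        try (BufferedReader reader = new BufferedReader(new FileReader(args[0]))) {
            String line;
            while ((line = reader.readLine()) != null) {
                String[] fields = line.trim().split("\\s+");
                if (fields.length <= sizeColumn || !fields[sizeColumn].matches("\\d+")) {
                    continue; // skip malformed or non-request lines
                }
                long bytes = Long.parseLong(fields[sizeColumn]);
                if (bytes > thresholdBytes) {
                    System.out.println(bytes + " bytes: " + line);
                }
            }
        }
    }
}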
The following troubleshooting questions and resolutions can help you determine why the response size is large:
1. Is "nbins" too high? This parameter, visible in the request log entry for a given query, is the maximum number of records that the MDEX Engine will return in the response. In the example above, nbins=25 is reasonable. A value such as nbins=1000 is likely too high; restrict it in the API code via the ENEQuery.setNavNumErecs() method (in the Java API) and use the paging control methods instead (see the sketch after this list). For information on these paging control methods, see the section "Paging through a record set" in the Endeca Developer's Guide.
2. Is a merchandising rule being returned that promotes a large number of records? You can check the result of a query in an Endeca reference application to see the supplemental objects returned (which contain the merchandising data) and how many records each object contains. You can also use the Dgraph --merch_debug flag and verify this information in the Dgraph error log.
If too many records are being returned with a rule, lower the "Max Records" setting on the rule's Style so that it returns no more than a fixed number of records.
3. Are large properties (such as long description fields or a crawled document's Endeca.Document.Text property) being returned in the record list? It is generally recommended to return such properties only on the individual record page, not in the record list; otherwise a potentially large document is returned unnecessarily with every list of results.
To avoid returning large properties in the record list, open your project in Developer Studio and clear the "Show with Record List" checkbox in the Dimension or Property editor for the given dimension or property. It is also recommended to use snippeting to expose portions of large properties on the record list.
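Below is a minimal sketch of the nbins fix from item 1 using the Java Presentation API. The connection details and navigation state are placeholders, and the exact method names (setNavNumERecs / setNavERecsOffset) should be confirmed against the "Paging through a record set" section of the Developer's Guide for your API version.

import com.endeca.navigation.*;

public class PagedRecordList {
    public static void main(String[] args) throws Exception {
        // Placeholder MDEX Engine host and port.
        HttpENEConnection conn = new HttpENEConnection("localhost", "8000");

        ENEQuery query = new ENEQuery();
        query.setNavDescriptors(new DimValIdList("0")); // root navigation state

        // Cap the records per response instead of pulling the whole result set (nbins).
        query.setNavNumERecs(25);
        // Advance this offset on each "next page" request to page through the records.
        query.setNavERecsOffset(0);

        ENEQueryResults results = conn.query(query);
        ERecList recs = results.getNavigation().getERecs();
        System.out.println("Records returned: " + recs.size());
    }
}

In a real application the offset would come from the incoming request's paging parameter rather than being hard-coded, so each page request pulls only the next small batch of records.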
To Baseline Or Not To Baseline
Is there a maximum depth for a hierarchical dimension?
Are dimension value properties searchable?
No, by default properties on a dimension value are not searchable.