
Why do we need separate storage configuration for the segment store (node) and the data store (binary) in AEM 6.4?

Statement: Reasons to separate the repository into a segment/node store and a data store/binary store

Solution:
Performance: 
  • In an ideal AEM repository configuration, the segment store fits entirely in memory.
  • Most AEM repositories hold binary data far too large to allow this, so we always separate the binaries out into a data store, keeping the segment store small.
  • With the segment store held entirely in RAM, you will see a massive increase in performance. The physical disk (bare metal), vDisk (VMware), EBS volume (AWS) or Managed Disk (Azure) used for the segment store should also be high-performance RAID 10 storage, ideally SSD/flash based, with a sufficient amount of IOPS available.
  • When sizing your AEM application servers, make sure to allocate enough RAM for an appropriately sized heap, plus additional memory for the operating system to cache the segment store files; see the start script sketch below.
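
As a rough illustration only, the heap is commonly set through the CQ_JVM_OPTS variable in the AEM start script; the values below are placeholders, not a sizing recommendation:

# crx-quickstart/bin/start -- illustrative values only, size to your own environment
CQ_JVM_OPTS='-server -Xms4g -Xmx8g -XX:MaxMetaspaceSize=512m -Djava.awt.headless=true'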

Maintenance: 
  • By splitting the segment store and data store, you have more flexible options for repository maintenance.
  • With a combined repository, the only way to reclaim disk space is via a tar compaction (also known as Revision GC); an offline compaction sketch using oak-run follows this list.
  • This can be a lengthy activity, depending on the number of changes to the repository, and in some AEM versions it must be completed with the AEM instance offline.
  • This means specific publishers will be out of the load balancer pool and your author will be offline during the maintenance.
  • Separating the data store from the segment store allows you to run an online Data Store GC, which can be very fast and can be done with the instance still up.
  • In newer AEM versions with repositories under 1 TB, these maintenance processes, when run regularly (weekly), can reclaim tens or hundreds of GB in a few seconds or minutes.
  • Additionally, having moved to a split FileDataStore model, you will (on AEM 6.4) be able to take advantage of the new "tail compaction" maintenance task, which can be run online with even greater frequency and only compacts data added since the last compaction, giving you the advantage of always having an AEM instance that is lean and performant.
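
If you do need to run an offline revision cleanup (tar compaction), it is typically performed with the oak-run tool while the instance is stopped. A minimal sketch, assuming a locally downloaded oak-run jar matching your Oak version (the version below is a placeholder):

java -Xmx4g -jar oak-run-1.8.12.jar compact crx-quickstart/repository/segmentstore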

Storage flexibility:
  • Separating out the segment store and data store allows you to place each on its own storage media.
  • Segment stores can be placed on SSD volumes or flash storage tiers for high performance, while your data store can use less expensive, lower-performance storage.
  • If leveraging AWS for your AEM deployment, you can use S3 for the data store, which is inexpensive and can be shared between publishers and author, reducing storage costs even more.
  • Splitting these, as well as logging, onto separate volumes allows simultaneous writes to different disks, further improving performance and removing potential bottlenecks; a data store configuration sketch is shown below.
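
As an illustration of the split, a FileDataStore is typically configured by placing an OSGi configuration file named org.apache.jackrabbit.oak.plugins.blob.datastore.FileDataStore.config into crx-quickstart/install; in this sketch the mount point /mnt/aem-datastore and the cache size are assumptions, not recommendations:

path="/mnt/aem-datastore/datastore"
minRecordLength="4096"
cacheSizeInMB="128"

On AWS, an S3 data store is configured the same way through the S3 connector's org.apache.jackrabbit.oak.plugins.blob.datastore.S3DataStore.config file (bucket, region and credentials), allowing author and publishers to share the same binary store.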


Enabling Vanity URLs with Adobe Experience Manager



Statement: Handling Vanity URLs Using the AEM Dispatcher Module

Solution:


  • Recent updates to the AEM Dispatcher module (since version 4.1.9) allow authors to control vanity URLs directly from the Author UI; these are automatically pushed out to the publishers, which then expose them to the dispatchers. A sketch of how a vanity URL is stored in the content follows.
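
Under the hood, the vanity URL an author enters in the page properties is stored on the page's jcr:content node; a minimal sketch with an assumed content path and vanity value:

/content/my-site/en/products/jcr:content
  sling:vanityPath = "/products"
  sling:redirect = false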


Steps to be followed:
  • Add a filter rule to the dispatcher farm to allow access to the vanity URL list:
/0100 { /type "allow" /url "/libs/granite/dispatcher/content/vanityUrls.html" }
  • Add a caching rule to prevent caching of this URL:
/0001 { /type "deny" /glob "/libs/granite/dispatcher/content/vanityUrls.html" }
  • Add the vanity_urls configuration to the farm:
/vanity_urls {
/url "/libs/granite/dispatcher/content/vanityUrls.html"
/file "/tmp/vanity_urls"
/delay 300
}

  • Restart Apache (the web server running the dispatcher module).
  • The file defined by the /file setting is not automatically created or updated at the interval set by /delay; it is only refreshed when a request fails the /filter rules of your dispatcher.
  • On such a failure, the dispatcher checks whether the file exists: if not, it generates it by pulling /libs/granite/dispatcher/content/vanityUrls.html from the publisher and uses it; if it exists and is not older than /delay seconds, it uses it as-is; and if it is older than /delay seconds, it refreshes it from the publish instance before using it. A consolidated farm sketch showing where these pieces fit is included below.
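
Putting it together, the three additions above sit in the dispatcher farm roughly as follows (the farm name and the surrounding rules are placeholders for your existing configuration):

/publishfarm {
  /filter {
    # ... your existing filter rules ...
    /0100 { /type "allow" /url "/libs/granite/dispatcher/content/vanityUrls.html" }
  }
  /cache {
    /rules {
      # ... your existing cache rules ...
      /0001 { /type "deny" /glob "/libs/granite/dispatcher/content/vanityUrls.html" }
    }
  }
  /vanity_urls {
    /url "/libs/granite/dispatcher/content/vanityUrls.html"
    /file "/tmp/vanity_urls"
    /delay 300
  }
}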

AEM 6.4 Sample Architecture for a Hybrid AEM and Sharded Deployment in the Rackspace Cloud

Hybrid AEM deployment with Rackspace Private Cloud and separate bare-metal authors for content and asset management



AEM architecture diagram: sharded author with cloud publishers



The above diagram represents a site with authoring requirements heavy enough that it makes sense to shard individual sites onto their own physical authoring environments. A SolrCloud cluster plugs into the authors to index DAM assets and assist with the custom authoring UI. The publish tier is served by Rackspace Public Cloud (OpenStack) servers, which can easily be cloned and scaled up and down to meet load demands.

AEM architecture diagram: connectivity between your author instance and the Adobe Marketing Cloud




Reference: https://blog.rackspace.com/sample-architecture-diagrams-for-adobe-experience-manager

Slow Query Development Tools in AEM

Query Development Tools

Adobe Supported
  • Query Performance operations console
    • Lists the slowest queries executed against the instance (described in detail below).
  • Explain Query
    • Explains how Oak executes a given query, including which index is used (described in detail below).

Community Supported
  • Oak Index Definition Generator
    • Generates an optimal Lucene property index definition from XPath or JCR-SQL2 query statements.
  • AEM Chrome Plug-in
    • Google Chrome web browser extension that exposes per-request log data, including executed queries and their query plans, in the browser's dev tools console.
    • Requires Sling Log Tracer 1.0.2+ to be installed and enabled on AEM.

Troubleshooting Slow Queries in AEM


Statement: Slow Query Classifications


Solution:



Slow Query Classifications



There are 3 main classifications of slow queries in AEM, listed by severity:
  1. Index-less queries
    • Queries that do not resolve to an index and traverse the JCR's contents to collect results
  2. Poorly restricted (or scoped) queries
    • Queries that resolve to an index, but must traverse all index entries to collect results
  3. Large result set queries
    • Queries that return very large numbers of results.

Note: 
  • The first 2 classifications of queries (index-less and poorly restricted) are slow, because they force the Oak query engine to inspect each potential result (content node or index entry) to identify which belong in the actual result set. 
  • In AEM 6.3, by default, when a traversal of 100,000 nodes is reached, the query fails and throws an exception.
  • This limit does not exist by default in AEM versions prior to 6.3, but it can be set via the Apache Jackrabbit Query Engine Settings OSGi configuration or the QueryEngineSettings JMX bean (property LimitReads).

1. Detecting Index-less Queries



During Development



Explain all queries and ensure their query plans do not contain the /* traverse explanation. Example traversing query plan (a property index sketch that resolves this traversal follows):
  • PLAN: [nt:unstructured] as [a] /* traverse "/content//*" where ([a].[unindexedProperty] = 'some value') and (isdescendantnode([a], [/content])) */
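
A minimal sketch of an Oak property index that would let the example query above resolve to an index instead of traversing; the node name and the unindexedProperty name are hypothetical, taken from the plan above:

/oak:index/unindexedProperty
  jcr:primaryType = "oak:QueryIndexDefinition"
  type = "property"
  propertyNames = ["unindexedProperty"]
  reindex = true

Whether a plain property index or a Lucene property index with an indexRule (as in the cq:tags example later in this post) is the better fit depends on how the query restricts and sorts its results.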


Post-Deployment



  • Monitor the error.log for index-less traversal queries:
    • *INFO* org.apache.jackrabbit.oak.query.QueryImpl Traversal query (query without index) ... ; consider creating an index
    • This message is only logged if no index is available and the query potentially traverses many nodes. Messages are not logged if an index is available but the amount of traversing is small, and thus fast.
  • Visit the AEM Query Performance operations console and Explain slow queries, looking for "traverse" or "no index" query explanations.

Query Performance



The Query Performance page allows the analysis of the slowest queries performed by the system. This information is provided by the repository in a JMX MBean.
In Jackrabbit, the com.adobe.granite.QueryStat JMX MBean provides this information, while in the Oak repository it is offered by org.apache.jackrabbit.oak.QueryStats.
The page displays:
  • The time when the query was made
  • The language of the query
  • The number of times the query was issued
  • The statement of the query
  • The duration in milliseconds




Explain Query



For any given query, Oak attempts to figure out the best way to execute it based on the Oak indexes defined in the repository under the oak:index node.
Depending on the query, different indexes may be chosen by Oak. Understanding how Oak is executing a query is the first step to optimizing the query.
The Explain Query tool explains how Oak is executing a query. It can be accessed by going to Tools - Operations - Diagnosis from the AEM Welcome Screen, then clicking on Query Performance and switching over to the Explain Query tab.
Features
  • Supports the XPath, JCR-SQL and JCR-SQL2 query languages
  • Reports the actual execution time of the provided query
  • Detects slow queries and warns about queries that could be potentially slow
  • Reports the Oak index used to execute the query
  • Displays the actual Oak Query engine explanation
  • Provides click-to-load list of Slow and Popular queries
Once you are in the Explain Query UI, all you need to do is enter the query and press the Explain button; for instance, you might try a query such as the one sketched below.
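
An example XPath query to paste in (the template path is a made-up example):

/jcr:root/content//element(*, cq:Page)[jcr:content/@cq:template = '/apps/my-site/templates/contentpage']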




The first entry in the Query Explanation section is the actual explanation. The explanation will show the type of index that was used to execute the query.
The second entry is the execution plan.
Ticking the Include execution time box before running the query will also show how long the query took to execute, providing more information that can be used to optimize the indexes for your application or deployment.


Detecting Poorly Restricted Queries

During Development



Explain all queries and ensure they resolve to an index tuned to match the query's property restrictions.
  • Ideal query plan coverage has indexRules for all property restrictions, and at a minimum for the tightest property restrictions in the query.
  • Queries that sort results should resolve to a Lucene property index whose index rules set ordered=true for the sorted-by properties.

For example, the default cqPageLucene index does not have an index rule for jcr:content/cq:tags.


Before adding the cq:tags index rule
  • cq:tags Index Rule
    • Does not exist out of the box
  • Query Builder query
    • type=cq:Page
      property=jcr:content/cq:tags
      property.value=my:tag
  • Query plan
    • [cq:Page] as [a] /* lucene:cqPageLucene(/oak:index/cqPageLucene) *:* where [a].[jcr:content/cq:tags] = 'my:tag' */
This query resolves to the cqPageLucene index, but because no property index rule exists for jcr:content/cq:tags, every record in the cqPageLucene index is checked when this restriction is evaluated to determine a match. This means that if the index contains 1 million cq:Page nodes, then 1 million records are checked to determine the result set.
After adding the cq:tags index rule
  • cq:tags Index Rule
    • /oak:index/cqPageLucene/indexRules/cq:Page/properties/cqTags
      @name=jcr:content/cq:tags
      @propertyIndex=true
  • Query Builder query
    • type=cq:Page
      property=jcr:content/cq:tags
      property.value=myTagNamespace:myTag
  • Query plan
    • [cq:Page] as [a] /* lucene:cqPageLucene(/oak:index/cqPageLucene) jcr:content/cq:tags:my:tag where [a].[jcr:content/cq:tags] = 'my:tag' */
The addition of the indexRule for jcr:content/cq:tags in the cqPageLucene index allows cq:tags data to be stored in an optimized way.
When a query with the jcr:content/cq:tags restriction is performed, the index can look up results by value. That means that if 100 cq:Page nodes have myTagNamespace:myTag as a value, only those 100 results are returned, and the other 999,900 records are excluded from the restriction checks, improving performance by a factor of 10,000. A node-level sketch of this index rule follows.
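
Expressed as a node definition, the index rule described above looks roughly like this:

/oak:index/cqPageLucene/indexRules/cq:Page/properties/cqTags
  jcr:primaryType = "nt:unstructured"
  name = "jcr:content/cq:tags"
  propertyIndex = true

If queries also order by cq:tags, the same property definition would additionally set ordered=true. After changing the definition, the cqPageLucene index needs to be reindexed before the new rule takes effect.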

Post-Deployment



  • Monitor the error.log for traversal queries:
    • *WARN* org.apache.jackrabbit.oak.spi.query.Cursors$TraversingCursor Traversed ### nodes ... consider creating an index or changing the query
  • Visit the AEM Query Performance operations console and Explain slow queries looking for query plans that do not resolve query property restrictions to index property rules.


Detecting Large Result Set Queries


During Development


Set low thresholds for oak.queryLimitInMemory (e.g. 10000) and oak.queryLimitReads (e.g. 5000), and optimize the expensive query when hitting an UnsupportedOperationException saying "The query read more than x nodes...". An example of setting these thresholds on a development instance follows.
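
For example, on a local development instance these thresholds can be passed as JVM parameters in the AEM start script (values taken from the guidance above):

-Doak.queryLimitInMemory=10000 -Doak.queryLimitReads=5000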

Post-Deployment


  • Monitor the logs for queries triggering large node traversal:
    • *WARN* ... java.lang.UnsupportedOperationException: The query read or traversed more than 100000 nodes. To avoid affecting other tasks, processing was stopped.
    • Optimize the query to reduce the number of traversed nodes
  • Monitor the logs for queries triggering large heap memory consumption:
    • *WARN* ... java.lang.UnsupportedOperationException: The query read more than 500000 nodes in memory. To avoid running out of memory, processing was stopped
    • Optimize the query to reduce the heap memory consumption
For AEM 6.0 - 6.2 versions, you can tune the thresholds for node traversal via JVM parameters in the AEM start script to prevent large queries from overloading the environment. The recommended values are:
  • -Doak.queryLimitInMemory=500000
  • -Doak.queryLimitReads=100000
In AEM 6.3, the above two parameters are preconfigured by default and can be modified via the OSGi QueryEngineSettings configuration; a hedged sketch of that configuration follows.
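
A minimal sketch of that OSGi configuration, placed for example as org.apache.jackrabbit.oak.query.QueryEngineSettingsService.config in crx-quickstart/install; the PID, property names and value types below should be verified against your AEM version before use:

queryLimitInMemory=L"500000"
queryLimitReads=L"100000"
queryFailTraversal=B"true"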
