Hi everybody:
I am running a cleanup process in my repository to delete old content (Alfresco Content Community 6.2 - Search Services 2.0.0). This process uses the Alfresco REST API to delete previously identified nodes. After successfully deleting content (204 response, as expected), Solr logs errors, but I don't know why. Following are the error message headers:
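For context, this is roughly what the deletion looks like. A minimal sketch using the Alfresco v1 REST API `DELETE /nodes/{nodeId}` endpoint; the base URL, credentials, and node id are placeholders, not values from my setup:

```python
# Minimal sketch: delete a node via the Alfresco v1 REST API.
# Base URL, credentials, and node id below are placeholders.
import base64
import urllib.request

ALFRESCO_BASE = "http://localhost:8080/alfresco/api/-default-/public/alfresco/versions/1"

def node_delete_url(base, node_id):
    """Build the DELETE endpoint URL for a node."""
    return f"{base}/nodes/{node_id}"

def delete_node(node_id, user="admin", password="admin"):
    """Send the DELETE request; Alfresco answers 204 No Content on success."""
    req = urllib.request.Request(node_delete_url(ALFRESCO_BASE, node_id),
                                 method="DELETE")
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    req.add_header("Authorization", f"Basic {token}")
    with urllib.request.urlopen(req) as resp:
        return resp.status  # expect 204

# Example (requires a running repository; node id is hypothetical):
# delete_node("some-node-id")
```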
11/16/2022, 7:49:02 AM ERROR SolrInformationServer
Unable to get nodes metadata from repository. See the stacktrace below for further details.
org.alfresco.error.AlfrescoRuntimeException: 10163133 api/solr/metadata return status:504
at org.alfresco.solr.client.SOLRAPIClient.callRepository(SOLRAPIClient.java:1596)
11/16/2022, 7:49:02 AM ERROR SolrInformationServer
Bulk indexing failed; do one node at a time. See the stacktrace below for further details.
java.lang.Exception: Error loading node metadata from repository for bulk delete.
Can you help me with this? What do I have to check? Am I doing something wrong by deleting this way?
Thanks in advance for your help
Hi:
In my experience deleting a high number of documents, I ran into cache problems when using a crawling strategy to delete everything under a repository path; in my case, with a recursive custom webscript. When those problems appeared, the recursive task was unable to finish...
In the end, the most effective approach was some kind of scheduler running the previous webscript over a limited batch.
I don't see the relation to the SOLR errors but, for a massive deletion, it may even be a good idea to stop indexing during the deletion.
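The batch-plus-scheduler idea above can be sketched like this. `delete_node` stands in for whatever issues the REST DELETE (a hypothetical helper, not Cesar's webscript), and the batch size and pause are illustrative values:

```python
# Hedged sketch: delete nodes in limited batches with a pause between
# batches, instead of one recursive sweep over the whole path.
import time

def batches(items, size):
    """Split a list of node ids into fixed-size batches."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

def delete_in_batches(node_ids, delete_node, batch_size=100, pause_seconds=120):
    """Delete node_ids batch by batch, pausing so caches and Solr can catch up."""
    for batch in batches(node_ids, batch_size):
        for node_id in batch:
            delete_node(node_id)   # expect HTTP 204 per node
        time.sleep(pause_seconds)  # throttle, like a scheduled job would
```

A real scheduler (cron, a Quartz job, etc.) would run one batch per invocation; the `time.sleep` here just mimics that spacing in a single process.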
Regards.
--C.
Additional information from alfresco.log
2022-11-16 10:58:10,052 WARN [org.alfresco.repo.cache.TransactionalCache.org.alfresco.cache.node.nodesTransactionalCache] [http-nio-8080-exec-9] Transactional update cache 'org.alfresco.cache.node.nodesTransactionalCache' is full (125000).
This message is logged in Alfresco at the same time as the Solr errors.
Thanks Cesar for your response. I am systematically deleting folders using the REST API, in most cases with a 204 (Successful) response, but at the same time this cache warning is being logged. I am adjusting the frequency of deletions (for example, two minutes or more between deletions) but the cache warning is still being logged.
Is this critical? Thanks a lot for your guidance
Those warnings are just telling you that the default size of the in-memory caches has been exceeded. Caches greatly improve repository performance, but they use Java heap memory; Alfresco ships with some default values. These warnings are expected when there are plenty of transactions going on.
https://docs.alfresco.com/content-services/latest/config/repository/#configure-the-repository-cache
You can increase it, but be mindful that it will take more heap memory. Alternatively, observe the logs when you are done deleting the folders; if the warning still comes back, that indicates there is excess activity on the repo, and you can increase the value then.
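For reference, the transactional cache named in the warning can be resized in alfresco-global.properties. The property name below follows Alfresco's cache naming convention (its default of 125000 matches the number in the warning), but verify it against the cache configuration docs linked above before relying on it:

```properties
# Raise the node transactional cache limit (default is 125000).
# Larger values consume more Java heap.
cache.node.nodesSharedCache.tx.maxItems=250000
```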