
Full reindex out of memory and possible solution...

Question asked by dellui on Nov 22, 2010
Latest reply on Jun 8, 2011 by alcibiade
Hi all.
I have to do a full reindex of ~8M transactions.
I am running Alfresco 3.0 Labs (community edition) on 64-bit hardware with a 32-bit OS and a 32-bit JDK (1.6), in a 4-node cluster over GFS sharing a 1 TB SAN disk.

Whenever I start a full reindex, it fails at around 50% with an OutOfMemoryError. I have tried many combinations of JAVA_OPTS and properties in custom-repository.properties, but nothing helps.
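For reference, the settings I tried were along these lines (values are illustrative, not my exact configuration; note that a 32-bit JVM caps the usable heap at roughly 2-3 GB no matter what you set):

```shell
# Illustrative JAVA_OPTS; on a 32-bit JVM the heap cannot grow much
# beyond ~2-3 GB, so raising -Xmx further has no effect.
export JAVA_OPTS="-server -Xms512m -Xmx2048m -XX:MaxPermSize=256m"
```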

My first question is whether some technique exists to make the full reindex "incremental". I have been searching but cannot find one.

To work around this, I implemented my own solution, but I do not know if it is correct:
I noticed that the transactions are well distributed among the cluster nodes, so I modified the FullIndexRecoveryComponent to rescan only one server at a time (adding the other three servers to excludeServerIds in the HibernateNodeDaoServiceImpl class).

This way I avoid the out-of-memory problem, because the full reindex is split into 4 steps.
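Roughly, the idea is the following (this is only a sketch with hypothetical names to show the filtering, not the real Alfresco 3.0 classes or their actual signatures):

```java
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

// Illustrative sketch: split a full reindex into one pass per cluster
// node by skipping transactions committed by the excluded servers,
// analogous to excludeServerIds in HibernateNodeDaoServiceImpl.
// The Txn record is a hypothetical stand-in for a transaction row.
public class PerServerReindexSketch {

    record Txn(long id, String serverId) {}

    // Return the ids of the transactions that a single pass would
    // reindex, i.e. those NOT committed by an excluded server.
    static List<Long> reindexPass(List<Txn> txns, Set<String> excludeServerIds) {
        return txns.stream()
                   .filter(t -> !excludeServerIds.contains(t.serverId()))
                   .map(Txn::id)
                   .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Txn> txns = List.of(
            new Txn(1, "node1"), new Txn(2, "node2"),
            new Txn(3, "node3"), new Txn(4, "node4"),
            new Txn(5, "node1"));

        // Pass 1 of 4: index only node1's transactions, excluding
        // the other three servers; repeat with a different target
        // server for the remaining passes.
        List<Long> pass1 = reindexPass(txns, Set.of("node2", "node3", "node4"));
        System.out.println(pass1); // prints [1, 5]
    }
}
```

Each pass then only walks a quarter of the transactions, which is why the memory pressure drops.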

Is what I did madness? :)
Do the indexes end up merged/updated, or destroyed?

Thanks a lot.
Gigi
