
Solr error: java.io.IOException: Mark invalid

Question asked by afernandez on Jan 6, 2014
Hello,

I'm using the following Alfresco version:

Alfresco Community v4.0.0 (4003) schema 5025
Spring Surf and Spring WebScripts - v1.0.0 (Release 958)

For some time now I've been seeing the following errors in the logs:


org.alfresco.solr.tracker.CoreTracker trackRepository
SEVERE: Tracking failed
java.io.IOException: Mark invalid
        at java.io.BufferedReader.reset(BufferedReader.java:485)
        at org.apache.lucene.analysis.CharReader.reset(CharReader.java:63)
        at org.apache.solr.analysis.HTMLStripCharFilter.restoreState(HTMLStripCharFilter.java:172)
        at org.apache.solr.analysis.HTMLStripCharFilter.read(HTMLStripCharFilter.java:734)
        at org.apache.solr.analysis.HTMLStripCharFilter.read(HTMLStripCharFilter.java:748)
        at java.io.Reader.read(Reader.java:123)
        at org.apache.lucene.analysis.CharTokenizer.incrementToken(CharTokenizer.java:77)
        at org.apache.solr.analysis.PatternReplaceFilter.incrementToken(PatternReplaceFilter.java:74)
        at org.apache.lucene.analysis.LengthFilter.incrementToken(LengthFilter.java:54)
        at org.apache.lucene.analysis.LowerCaseFilter.incrementToken(LowerCaseFilter.java:38)
        at org.apache.solr.analysis.WordDelimiterFilter.incrementToken(WordDelimiterFilter.java:337)
        at org.apache.solr.analysis.SnowballPorterFilter.incrementToken(SnowballPorterFilterFactory.java:116)
        at org.apache.solr.analysis.WordDelimiterFilter.incrementToken(WordDelimiterFilter.java:337)
        at org.apache.lucene.analysis.LowerCaseFilter.incrementToken(LowerCaseFilter.java:38)
        at org.apache.lucene.analysis.StopFilter.incrementToken(StopFilter.java:225)
        at org.apache.lucene.analysis.ASCIIFoldingFilter.incrementToken(ASCIIFoldingFilter.java:71)
        at org.apache.lucene.analysis.TokenStream.next(TokenStream.java:406)
        at org.apache.solr.analysis.BufferedTokenStream.read(BufferedTokenStream.java:97)
        at org.apache.solr.analysis.RemoveDuplicatesTokenFilter.process(RemoveDuplicatesTokenFilter.java:50)
        at org.apache.solr.analysis.BufferedTokenStream.next(BufferedTokenStream.java:85)
        at org.alfresco.repo.search.impl.lucene.analysis.MLTokenDuplicator.buildIterator(MLTokenDuplicator.java:116)
        at org.alfresco.repo.search.impl.lucene.analysis.MLTokenDuplicator.next(MLTokenDuplicator.java:95)
        at org.alfresco.repo.search.impl.lucene.analysis.MLTokenDuplicator.next(MLTokenDuplicator.java:109)
        at org.apache.lucene.analysis.TokenStream.incrementToken(TokenStream.java:321)
        at org.apache.lucene.index.DocInverterPerField.processFields(DocInverterPerField.java:189)
        at org.apache.lucene.index.DocFieldProcessorPerThread.processDocument(DocFieldProcessorPerThread.java:244)
        at org.apache.lucene.index.DocumentsWriter.updateDocument(DocumentsWriter.java:828)
        at org.apache.lucene.index.DocumentsWriter.updateDocument(DocumentsWriter.java:809)
        at org.apache.lucene.index.IndexWriter.updateDocument(IndexWriter.java:2683)
        at org.apache.lucene.index.IndexWriter.updateDocument(IndexWriter.java:2655)
        at org.alfresco.solr.AlfrescoUpdateHandler.addDoc(AlfrescoUpdateHandler.java:323)
        at org.alfresco.solr.tracker.CoreTracker.indexNode(CoreTracker.java:2051)
        at org.alfresco.solr.tracker.CoreTracker.trackRepository(CoreTracker.java:1410)
        at org.alfresco.solr.tracker.CoreTracker.updateIndex(CoreTracker.java:491)
        at org.alfresco.solr.tracker.CoreTrackerJob.execute(CoreTrackerJob.java:45)
        at org.quartz.core.JobRunShell.run(JobRunShell.java:216)
        at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:563)


The system still works, but documents uploaded since the error started occurring are no longer taken into account in searches.
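
If I read the top of the stack correctly, the exception comes from BufferedReader.reset(): the mark saved earlier by HTMLStripCharFilter gets invalidated once more characters have been read than the read-ahead limit passed to mark(). A minimal illustration of that behaviour in plain Java (just a sketch of the JDK contract, not Alfresco code):

import java.io.BufferedReader;
import java.io.IOException;
import java.io.StringReader;

public class MarkInvalidDemo {
    public static void main(String[] args) throws IOException {
        // A tiny 4-char internal buffer makes the behaviour easy to trigger.
        BufferedReader reader = new BufferedReader(new StringReader("abcdefghij"), 4);

        reader.mark(2);                  // promise to read at most 2 chars before reset()
        for (int i = 0; i < 5; i++) {    // read past that limit and past the buffer,
            reader.read();               // forcing a refill that discards the mark
        }

        reader.reset();                  // throws java.io.IOException: Mark invalid
    }
}

So it looks like something in the analysis chain reads further ahead than HTMLStripCharFilter expected when it set its mark.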

Searching on Google, I found similar problems in other uses of Solr, for example:
https://issues.apache.org/jira/browse/SOLR-1283

Does anyone have an idea of what causes this error and how to fix it?
