We are all aware of the deadlock problem with the Faroo p2p client, which is mentioned in many open posts in this forum.
I observed that in many cases (if not all?) the crawler queue after a restart of the client is the same as at the previous start. In other words: the queue is not updated while the client is running; it seems to be saved only at shutdown. Restarting a previously crashed client therefore repeats crawling, indexing and distributing the same web page content. In my opinion, this is a waste of time and resources. The only benefit of this behaviour is that the indexed words are distributed to different peers than the first time, broadening the overall database.
However, I would like to see interim updates of the crawler queue, so that a restart after a crash does not mean re-crawling all those pages.
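To make the suggestion concrete, here is a minimal sketch of what such interim checkpointing could look like. This is purely illustrative Python, not Faroo's actual code (the client is closed source); the file name, the checkpoint interval and the queue structure are all assumptions. The key points are that the queue is saved periodically during operation, not only at shutdown, and that the save is done via write-then-rename so a crash mid-write cannot corrupt the last good checkpoint:

```python
import json
import os
import tempfile
import time


class CrawlerQueue:
    """Toy crawl queue that checkpoints its state periodically,
    so a crash loses at most one interval of progress."""

    def __init__(self, path="crawler_queue.json", interval=60.0):
        self.path = path          # hypothetical checkpoint file
        self.interval = interval  # seconds between interim saves (assumption)
        self.pending = []         # URLs still to crawl
        self.done = set()         # URLs already crawled
        self._last_save = time.monotonic()

    def add(self, url):
        if url not in self.done and url not in self.pending:
            self.pending.append(url)

    def pop(self):
        # Hand out the next URL and record it as done, then
        # checkpoint if enough time has passed since the last save.
        url = self.pending.pop(0)
        self.done.add(url)
        if time.monotonic() - self._last_save >= self.interval:
            self.save()
        return url

    def save(self):
        # Write to a temp file, then atomically replace the old
        # checkpoint, so a crash mid-write never corrupts it.
        fd, tmp = tempfile.mkstemp(dir=os.path.dirname(self.path) or ".")
        with os.fdopen(fd, "w") as f:
            json.dump({"pending": self.pending, "done": sorted(self.done)}, f)
        os.replace(tmp, self.path)
        self._last_save = time.monotonic()

    @classmethod
    def load(cls, path="crawler_queue.json"):
        # After a crash, resume from the last checkpoint instead of
        # re-crawling everything from the stale startup queue.
        q = cls(path)
        if os.path.exists(path):
            with open(path) as f:
                state = json.load(f)
            q.pending = state["pending"]
            q.done = set(state["done"])
        return q
```

Even a coarse interval (say, once a minute) would bound the repeated work after a crash to that interval, instead of the entire session.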