Joel.Saunders at rfsuny.org
Tue Oct 13 16:06:47 EDT 2009
I can break it down into 30 or more sites. If I then run allsites with
those 30 sites, though, it would still process them one at a time, correct?
Timeout and verbose options would still be nice.
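One way around the one-at-a-time behaviour of allsites is a small shell wrapper that launches one sitecopy invocation per site concurrently. A minimal sketch, with `echo` as a dry-run stand-in for the real sitecopy call and hypothetical site names:

```shell
#!/bin/sh
# Sketch: update several sitecopy sites concurrently instead of
# relying on the sequential 'sitecopy --allsites'.
# 'echo' stands in for the real command; remove it to actually run
# sitecopy. Site names are placeholders, not from the rcfile.
update_all() {
  for site in "$@"; do
    echo sitecopy --update "$site" &   # one background job per site
  done
  wait                                  # block until all jobs finish
}

update_all intranet docs archive
```

Backgrounding every job at once is fine for a handful of sites; for 30, a tool like `xargs -P` would cap the number of simultaneous connections.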
From: Joe Orton [mailto:joe at manyfish.co.uk]
Sent: Tuesday, October 13, 2009 4:03 PM
To: Saunders, Joel
Cc: sitecopy at lists.manyfish.co.uk
Subject: Re: Sitecopy questions
On Tue, Oct 13, 2009 at 03:25:29PM -0400, Saunders, Joel wrote:
> I ran the debug build with the list option instead of the update option... It
> still sat there for 20 min and in the end put nothing in the log.
> Here are the stats that you were asking for:
> #files : > 79,000 (some going back to 1/04)
> Size : over 9.4 GB
> I imagine that this is the real issue.... Need to discuss purging with
> the users, I guess.
> Would like a --verbose option to see what it's doing at all times,
> though. Also, if I can't reduce the files right away, it would be
> good to be able to up the timeout in some fashion.
Um, yeah, that's quite big. sitecopy is not designed to manage a site
that large; it has some exponential-time list-scanning routines which
might be taking up your 20 minutes, if not disk access.
I presume you are not using "state checksum" on this site? That would
have to read all 9.4 GB of content on every sitecopy invocation, too.
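For context, the rcfile option in question looks roughly like this; "state checksum" makes sitecopy compare stored checksums (reading every local file on each run), while the default "state timesize" compares only modification time and size. Server name and paths below are placeholders, not taken from this thread:

```
site bigsite
  server www.example.com
  username joel
  local /var/www/bigsite
  remote /htdocs
  # 'state checksum' forces a full read of all local content each run;
  # the default 'state timesize' compares only mtime and size.
  state timesize
```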
The only other thing I could recommend would be to try partitioning the
site across several rcfile entries so sitecopy has less to manage per
invocation.
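Splitting one huge site definition into several smaller ones in ~/.sitecopyrc might look like the sketch below, with each entry covering one subtree. Server name, username, and paths are hypothetical:

```
site www-docs
  server www.example.com
  username joel
  local /var/www/site/docs
  remote /htdocs/docs

site www-archive
  server www.example.com
  username joel
  local /var/www/site/archive
  remote /htdocs/archive
```

Each entry keeps its own state file, so a run against one partition never rescans the others.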