Re: fastCGI, memory leaks etc

Nigel Metheringham (Nigel.Metheringham@theplanet.net)
Mon, 19 Aug 1996 13:42:41 +0100

Message-Id: <m0usTfG-000BGBC@dingo.theplanet.co.uk>
To: Michael Smith <mjs@cursci.co.uk>
From: Nigel Metheringham <Nigel.Metheringham@theplanet.net>
Subject: Re: fastCGI, memory leaks etc 
In-Reply-To: Your message of "Mon, 19 Aug 1996 12:43:07 BST."
             <3218534B.1CC068FC@cursci.co.uk> 
Date: Mon, 19 Aug 1996 13:42:41 +0100

} I'm surprised this hasn't been raised before but I would like to be able
} to have some directive for apache-fastCGI processes similar to
} MaxRequestsPerChild, or at least some other way to achieve this effect.
} 
} I have tried to do this in the fastCGI process itself by saying
} while((FCGI::accept()>0) && (count<100)) which achieves the desired
} effect but the 100th access results in a server error.  Is there a way
} to exit the fastCGI-loop gracefully?  It is important for me to be able
} to restart the process every now and then, so if anybody can be of any
} help I'd be eternally grateful (or till the end of the week, whichever
} is sooner).

Exit your process explicitly after n connections - something like 
this at the end of the loop:

	if (count++ > 100)
		exit(0);

The problem with your code is the ordering: the count is tested in the 
loop condition, so the 100th connection is accepted first and only then 
do you drop out of the processing loop, without ever answering it - 
hence the server error. Testing and exiting at the end of the loop body 
means every accepted request gets a response before the process dies.
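Put together, the whole loop would look something like this (just a 
sketch, assuming the Perl FCGI module from your original code; the 
MAX_REQUESTS value and the request-handling body are placeholders):

	use FCGI;

	my $count = 0;
	my $MAX_REQUESTS = 100;	# restart after this many requests

	while (FCGI::accept() >= 0) {
		# ... handle the request here, e.g. ...
		print "Content-type: text/plain\r\n\r\nOK\n";

		# Test *after* the request has been fully answered,
		# not in the loop condition.  The web server notices
		# the exit and starts a fresh copy of the process.
		exit(0) if ++$count >= $MAX_REQUESTS;
	}

Since the exit happens between requests, no client ever sees a dropped 
connection, and any memory leaked by the script is reclaimed each time 
a fresh process is spawned.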

	Nigel.
-- 
[ Nigel.Metheringham@theplanet.net   - Unix Applications Engineer ]
[ *Views expressed here are personal and not supported by PLAnet* ]
[ PLAnet Online : The White House          Tel : +44 113 251 6012 ]
[ Melbourne Street, Leeds LS2 7PS UK.      Fax : +44 113 2345656  ]