Re: Plumbing of Fast CGI Streams

Jim Fulton (jim.fulton@digicool.com)
Thu, 04 Jul 1996 11:36:27 -0400

Message-Id: <31DBE4FB.58D0@digicool.com>
Date: Thu, 04 Jul 1996 11:36:27 -0400
From: Jim Fulton <jim.fulton@digicool.com>
To: Mark Brown <mbrown@OpenMarket.com>
Subject: Re: Plumbing of Fast CGI Streams 

Mark Brown wrote:
> 
> Excellent questions.

Thank you. :-)

> 
>     > But if the buffer fills the
>     > application will block.  In the future we'd like to make the buffers
>     > expand on demand in order to prevent the application from blocking on
>     > output.
> 
>     I think that this is pretty critical, especially if you aim to support
>     state-full single-threaded applications.
> 
> Can you explain what you mean by a state-full application?

State-full applications have state that persists across requests and that
may be changed by multiple requests.
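A minimal sketch of what I mean, in Python with invented names (this is
an illustration, not a FastCGI API): a hit counter that every request,
from any client, reads and updates.

```python
# Hypothetical sketch: a state-full application holds state that
# persists across requests and that any request may change.

class CounterApp:
    def __init__(self):
        self.hits = 0  # persistent state, shared by all requests

    def handle_request(self, client):
        self.hits += 1  # every request can modify the shared state
        return "client %s sees %d total hits" % (client, self.hits)

app = CounterApp()
app.handle_request("alice")
app.handle_request("bob")  # bob observes the effect of alice's request
```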
 
> I agree this is an important issue, but it seems important to
> *any* application that writes more than the server is prepared to
> buffer.

But non-state-full applications can easily be implemented with multiple server 
processes, so if one is blocked, other processes are available to handle new
requests. On the other hand, if a state-full application is distributed
over multiple processes, then some provision must be made for sharing
state information between processes.

>     We develop state-full applications and splitting these applications
>     into multiple processes is unattractive.  We can't, in general, limit
>     the amount of data these applications input or output, so if we want to
>     use Fast CGI, it appears we need a multi-threaded library.
> 
> Can you explain why you can't use session affinity to run your app
> as multiple processes?  In many cases session affinity works well.

Because session affinity helps only when state needs to be shared among
requests from the same client.  Our applications have state that must be
shared among requests from different clients.  
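To make the limitation concrete, here is a toy Python sketch (names and
hashing invented for illustration): affinity pins each client to one
process, but each process then holds its own private copy of the state,
so state that should be shared across clients silently diverges.

```python
# Hypothetical sketch: session affinity with per-process state.

NPROCS = 2

def process_for(client):
    # Affinity: a given client is always routed to the same process.
    # (Deterministic toy hash; a real server would hash a session id.)
    return sum(map(ord, client)) % NPROCS

# One "shared" counter per process -- really NPROCS separate counters.
counters = [0] * NPROCS

def handle_request(client):
    p = process_for(client)
    counters[p] += 1
    return counters[p]

handle_request("a")
handle_request("a")
handle_request("b")  # "b" lands in a different process than "a",
                     # so it never sees "a"'s two requests
```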

>     OK, suppose we decide to write a multi-threaded Fast CGI
>     application library.
> 
>     Now, we seem to have three choices:
> 
>     - Accept multiple requests over a single multiplexed connection,
>     - Accept multiple requests over multiple non-multiplexed connections, or
>     - Accept multiple requests over multiple multiplexed connections.
> 
> None of the *servers* supports connection multiplexing today,
> so your second option is the way to go today.

OK, thanks.
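For the record, here is a rough sketch of that second option, written in
Python rather than C and with invented names (a sketch of the structure,
not the FastCGI wire protocol): a single process accepts multiple
non-multiplexed connections, each carrying one request at a time, and
serves each connection in its own thread.  All threads share the
application's state, and an application instance blocked on output
stalls only its own connection.

```python
# Sketch: multi-threaded library accepting multiple non-multiplexed
# connections in one process.

import socket
import threading

def serve_connection(conn, app):
    # One request per connection: read the request, run the
    # application, write the response, close the connection.
    with conn:
        request = conn.recv(4096)
        conn.sendall(app(request))

def accept_loop(listener, app, nrequests):
    # Each accepted connection gets its own thread, so a blocked
    # writer stalls only the request on its own connection.
    threads = []
    for _ in range(nrequests):
        conn, _addr = listener.accept()
        t = threading.Thread(target=serve_connection, args=(conn, app))
        t.start()
        threads.append(t)
    for t in threads:
        t.join()
```

The key point is that the shared state lives in one address space, so no
cross-process state-sharing machinery is needed.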
 
> As you point out, the server must implement very flexible buffering
> to make connection multiplexing work.  That's one reason why
> the Open Market server doesn't perform connection multiplexing today.

OK.

> Multiplexing doesn't make sense for Apache and NCSA, which handle
> only one request at a time per server process.

Right.
 
> We provided for connection multiplexing in the protocol from
> the outset because as performance standards rise (and they are rising
> quite steeply) multiplexing is bound to become necessary.

Fair enough.

Thanks.

Jim

-- 
Jim Fulton         Digital Creations
jim@digicool.com   540.371.6909