Re: Plumbing of Fast CGI Streams

Jim Fulton
Thu, 04 Jul 1996 10:07:53 -0400


Mark Brown wrote:
> The server has a reasonably large per-connection buffer (16 K bytes in
> the case of the Open Market Secure WebServer) so the app would have to
> write quite a bit in order to block. 

We do not consider 16 K to be "quite a bit" of data.

> But if the buffer fills the
> application will block.  In the future we'd like to make the buffers
> expand on demand in order to prevent the application from blocking on
> output.

I think that this is pretty critical, especially if you aim to support
stateful single-threaded applications.

> There typically isn't much input so application blocking on input is
> quite rare.  A large POST could do it though.  The server does *not*
> pre-read CONTENT_LENGTH bytes before connecting to the app.
> An application could accept multiple connections in order to avoid
> blocking, but not using the current app lib.

We develop stateful applications, and splitting these applications
into multiple processes is unattractive.  We can't, in general, limit
the amount of data these applications read or write, so if we want to
use Fast CGI, it appears we need a multi-threaded library.

OK, suppose we decide to write a multi-threaded Fast CGI application library.

Now, we seem to have three choices:

  - Accept multiple requests over a single multiplexed connection,
  - Accept multiple requests over multiple non-multiplexed connections, or
  - Accept multiple requests over multiple multiplexed connections.

If we use multiplexed connections, the application library will place
output packets (e.g. for stdout or stderr) for various requests on a 
multiplexed connection in some order.  At the server side, the server 
will extract output packets from the connection in the order that they 
were put on the connection.  What happens if the server's output buffer 
for a request is full, say because more than 16 K of data has been output 
to a slow client connection?  

I assume that the output packet is left on the connection until the server has 
room for it in its output buffer. Or, at least, no additional packets will
be read from the connection until the data for the packet has been moved to
the output buffer. If this is the case, then output packets of other requests 
will be blocked, and therefore other requests on the same multiplexed connection 
are blocked.  Right?

Alternatively, the server could squirrel away output packets for requests with
full output buffers in some special overflow structure; however, I assume
that the server does not do this, since, if it did, it could do the same thing 
for single-threaded applications and single-threaded applications would not block.  
Since single-threaded applications can block, the server must not provide 
any overflow storage for multiplexed connections.  Have I got this right?

The gist of all of this is that, if a multi-threaded application is going to generate 
output that can exceed the size of the server's output buffer, then the application 
should not use multiplexed connections.  Of course, the Fast CGI specification doesn't 
say anything about minimum server output buffer sizes, so portable applications may 
have to make conservative assumptions, or provide run-time configuration options.  
This makes the use of multiplexed connections look unattractive. Are there any advantages 
to using multiplexed connections over simply using multiple non-multiplexed connections?


Jim Fulton         Digital Creations   540.371.6909