Re: FastCGI architectural questions

Bill Snapper (snapper@gigapacket.com)
Thu, 14 Aug 1997 13:54:54 -0400

Message-Id: <3.0.3.32.19970814135454.00975a60@shultz.gigapacket.com>
Date: Thu, 14 Aug 1997 13:54:54 -0400
To: Sonya Rikhtverchik <rikhtver@OpenMarket.com>
From: Bill Snapper <snapper@gigapacket.com>
Subject: Re: FastCGI architectural questions
In-Reply-To: <199708141433.KAA06053@u4-138.openmarket.com>

>I have read the FastCGI white paper, and neither I nor AltaVista could find 
>the mentioned "FastCGI Protocol Specification". I honestly would have read 
>it. Where is it? (this was Question #0)

Try "http://www.fastcgi.com/kit/doc/fcgi-spec.html"

>Q1. In the mailing list archive I have read that a long query over a slow 
>connection blocks the fcgi process. Why is it like that? Why does the Web 
>server start passing fcgi data to the process while it is still on the 
>line, and why is there no buffering? Of course it is speed vs. safe 
>continuous/non-blocked processing. But the delay caused by buffering is 
>not more than the time needed to pass data from the Web server to the 
>fcgi application, which are usually going to be on the same machine (90% 
>of installations) or on the same LAN (other 9% of installations).
>

The goal was to stream the data from the request to the FastCGI application,
just as is done for a CGI script.  The same problem exists for CGI scripts,
and for native server plugins for that matter, whenever they need to read
data from a web client.  The web server will not start the request to the
FastCGI application until the entire HTTP header has been read in.  It is
not the Web server's responsibility to read in any additional content the
client sent; that is done on behalf of the script or application.  Some
servers will start reading data from the client based on the Content-Length
header, but if they're properly coded they won't read more than can be sent
to the intended target (the FastCGI app in this case), since buffering it
all would bloat the server.  It's also a good way to do flow control.  A
properly coded server is not blocked from other processing while it reads
data from a socket and forwards it to a FastCGI or CGI script.  Some servers
do process this way, but not all.
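
To make the streaming concrete, here is a minimal responder sketch using the
fcgi_stdio library from the FastCGI developer's kit.  It reads the request
body from stdin in small chunks as the server relays it from the client,
rather than expecting the whole body to be buffered up front.  The buffer
size and the output text are only illustrative.

    #include "fcgi_stdio.h"   /* developer's kit wrapper around stdio */
    #include <stdlib.h>

    int main(void)
    {
        /* Each iteration handles one request streamed from the server. */
        while (FCGI_Accept() >= 0) {
            char buf[4096];
            long remaining = 0;
            long total = 0;
            const char *len = getenv("CONTENT_LENGTH");
            if (len != NULL)
                remaining = atol(len);

            /* The body arrives on stdin as the server relays it; reading
             * in chunks means neither the server nor the application has
             * to hold the whole upload in memory. */
            while (remaining > 0) {
                size_t want = remaining < (long) sizeof(buf)
                              ? (size_t) remaining : sizeof(buf);
                size_t got = fread(buf, 1, want, stdin);
                if (got == 0)
                    break;      /* client went away or server aborted */
                total += (long) got;
                remaining -= (long) got;
            }

            printf("Content-type: text/plain\r\n\r\n");
            printf("Read %ld bytes of request body\r\n", total);
        }
        return 0;
    }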


>  Anyway: is it an implementation issue or caused by the architecture?

Neither; the premise reflects a misunderstanding of the technology.

>
>
>
>Q2. I would much rather be happy with a Web server and an fcgi dispatcher 
>server separately. As far as I could understand the Web server is 
>responsible to start fcgi processes at startup, restart them if they are 
>stopped. And of course Web server is responsible to send http request 
>data to the fcgi process.
>
>I would separate the roles. The Web server communicates with the fcgi 
>dispatcher, which is on the same machine as the web server OR runs as a 
>daemon on a different machine or machines.
>
>The Web server is responsible for the communication, and the fcgid 
>process(es) communicate with the fastcgi processes. Now the fastcgi 
>processes can be on the same machine as the fcgid, and therefore the api 
>for the client can also be simpler.
>


1) There is nothing in the FastCGI specification which states that a web
   server must manage applications.  There's no reason you couldn't have
   a separate process manager; that design and implementation is up to
   the developer.  Some servers have integrated a FastCGI application
   manager with the Web Server, and that works fine also.  These integrated
   managers are intended for managing local processes.

2) One of the main benefits of FastCGI is that the FastCGI process can
   be local or remote to the machine the Web Server is running on.  Remote
   processes give you several benefits (a connection sketch follows the list):

   o ability to distribute the web processing
   o allows you to put a FastCGI process on a legacy system where there
     is either no web server or you don't want to put one.  You can now
     present legacy data on the web quite easily.
   o allows you to run a Unix web server and serve up data from an NT or
     VMS based FastCGI application for example.
   o etc...
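
   As a rough sketch of the remote case, an application built with fcgiapp.h
   from the developer's kit can listen on a TCP port itself instead of on the
   pipe or socket a local process manager would hand it.  The port number
   below is arbitrary, and the directive that tells your web server to
   connect to an external host:port pair varies by server.

    #include "fcgiapp.h"   /* lower-level API from the developer's kit */

    int main(void)
    {
        FCGX_Request req;

        FCGX_Init();

        /* Listen on TCP port 9000 (":9000") instead of the default pipe
         * a local process manager would create.  A web server on another
         * machine can then be configured to connect to this host:port. */
        int sock = FCGX_OpenSocket(":9000", 5);
        FCGX_InitRequest(&req, sock, 0);

        while (FCGX_Accept_r(&req) >= 0) {
            FCGX_FPrintF(req.out,
                         "Content-type: text/plain\r\n\r\n"
                         "Served from a remote FastCGI process\r\n");
            FCGX_Finish_r(&req);
        }
        return 0;
    }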


>Q3. As far as I can understand session affinity and authorizer scripts, 
>they should work together, with session affinity supported by the 
>authorizer.
>
>Here is my opinion what an authorizer script should do:
>
>- - Control access based on whatever it can be based on.
>- - Control the session by asking the Web server (or the fcgi server if the 
>  scripts run on a different machine) to start a new responder process, or 
>  telling the Web server or fcgi server which already-running process to 
>  use for serving the hit.
>
>This way FastCGI applications can run in separate processes for each user 
>session, but not for each hit. Authorizer scripts can do load balancing 
>between fastcgi processes, and even fastcgi processes running on 
>different machines. This would also lead to scalability without 
>multithreaded cgi scripts (which cannot even be done at all in Perl, 
>the most popular scripting tool).
>


FastCGI allows you to do this.  Authorizers can be used in this way.
You could also distribute processing with session affinity without requiring
an authorizer.
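
For reference, an Authorizer is just another FastCGI application: per the
spec it allows the request by returning a 200 status, and it may pass extra
name-value pairs downstream as Variable-* headers.  The sketch below checks
for a session cookie; the SESSION_BACKEND variable and how (or whether) a
given server uses it to route the hit to a particular responder process are
assumptions for illustration, not part of the spec.

    #include "fcgi_stdio.h"
    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        while (FCGI_Accept() >= 0) {
            const char *cookie = getenv("HTTP_COOKIE");

            if (cookie != NULL && strstr(cookie, "SESSION=") != NULL) {
                /* Allow the request.  An Authorizer may also pass extra
                 * name-value pairs downstream as Variable-* headers;
                 * whether the server uses such a value to pick a
                 * particular responder process is server-specific. */
                printf("Status: 200\r\n");
                printf("Variable-SESSION_BACKEND: app-pool-1\r\n");
                printf("\r\n");
            } else {
                /* Deny; the server returns this response to the client. */
                printf("Status: 403\r\n");
                printf("Content-type: text/plain\r\n\r\n");
                printf("No session cookie\r\n");
            }
        }
        return 0;
    }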

>Your opinion?

Read the white paper and the specification.  Then get a FastCGI-aware server
and experiment.  I believe FastCGI will do what you want it to.