On Wed, 20 Jan 1999 kugler at us.ibm.com wrote:
>
> Scott Lawrence wrote:
> Original Article: http://www.egroups.com/list/http-wg/?start=8491
> > kugler at us.ibm.com wrote:
> >
> > > The IPP WG would really like clarification on this point: Is the
> > > intent of the HTTP/1.1 spec to say that an HTTP/1.1 server MAY
> > > reject any request without a defined Content-Length? This would
> > > imply that a conformant HTTP/1.1 server MAY reject any request
> > > with the "chunked" transfer-coding.
> >
> > First, I think that the note Harry Lewis sent titled "IPP> Chunking
> > Explanation" [1] sums it up pretty well. An HTTP server certainly
> > has the option of using the "Length Required" code for whatever
> > reason it wants to.
> If this is the correct interpretation, then I was misled for a long
> time by the paragraph in section 4.4, "Message Length", that says
> "All HTTP/1.1 applications that receive entities MUST accept the
> 'chunked' transfer-coding (section 3.6), thus allowing this mechanism
> to be used for messages when the message length cannot be determined
> in advance." I think it would be very helpful to have a note or
> warning added to that paragraph; perhaps:
>
> All HTTP/1.1 applications that receive entities MUST accept the
> "chunked" transfer-coding (section 3.6), thus allowing this mechanism
> to be used for messages when the message length cannot be determined
> in advance. Note: this does NOT mean that servers must accept
> HTTP/1.1 requests containing a message-body with the "chunked"
> transfer-coding.
>
> > My own judgement would be that a printer design that did not allow
> > for very large inputs of indeterminate length would be a poor one,
> > and as a result I would not choose an HTTP layer implementation
> > that restricted me to CGI.
> >
> Agreed.
>
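As an illustration of the client side of this, here is a minimal
sketch, in modern Python, of a client that tries a chunked POST first
and, if the server answers 411 Length Required, buffers the body and
resends it with a Content-Length. The host, path, and function name
are hypothetical, not anything taken from the IPP or HTTP specs:

    import http.client

    def post_job(host, path, chunks):
        chunks = list(chunks)         # keep a copy in case we must retry
        conn = http.client.HTTPConnection(host)
        conn.putrequest("POST", path)
        conn.putheader("Transfer-Encoding", "chunked")
        conn.endheaders()
        for chunk in chunks:
            # each chunk: hex size, CRLF, data, CRLF (section 3.6)
            conn.send(b"%x\r\n" % len(chunk) + chunk + b"\r\n")
        conn.send(b"0\r\n\r\n")       # zero-size chunk ends the body
        resp = conn.getresponse()
        if resp.status != 411:        # server accepted the chunked body
            return resp
        resp.read()
        # 411 Length Required: buffer everything and state the length
        body = b"".join(chunks)
        conn = http.client.HTTPConnection(host)
        conn.request("POST", path, body,
                     {"Content-Length": str(len(body))})
        return conn.getresponse()

A production client would also have to handle the server closing the
connection mid-send, but the fallback logic is the point here.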
I believe that the history of this problem has to do with the
limitations of CGI/1.1 and the fact that it has been essentially
frozen. For a chunked message-body to go to a CGI program it must
first be unchunked. CGI has not advanced along with HTTP/1.1, so the
CGI interface only provides for a body of known length (the
CONTENT_LENGTH variable). This means that the only possibility is for
the server to unchunk the data, buffer it, and calculate its length.
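To make that concrete, here is a minimal sketch (again in modern
Python, with an assumed file-like stream `rfile` positioned just past
the request headers) of the unchunking a server would have to do
before it could hand the body to a CGI program:

    def dechunk(rfile):
        """Read a chunked message-body in full, so the server can give
        CGI an ordinary body plus a CONTENT_LENGTH."""
        body = bytearray()
        while True:
            # chunk-size line: hex digits, optionally ";extensions"
            size = int(rfile.readline().split(b";", 1)[0].strip(), 16)
            if size == 0:
                break                 # the zero-size last-chunk
            body += rfile.read(size)  # chunk-data
            rfile.read(2)             # CRLF that follows each chunk
        # consume any trailer headers up to the final blank line
        while rfile.readline() not in (b"", b"\n", b"\r\n"):
            pass
        return bytes(body)            # len() of this is CONTENT_LENGTH

The buffering is the objectionable part: the whole body must be held
in memory or on disk before the CGI program can even be started.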
Some server implementations (most notably Apache) decided they were
unwilling to do this -- presumably for efficiency reasons. Hence,
while they are required to "accept" chunked message bodies, they are
not required to buffer them so they can be used by CGI. This may make
the notion of "accept" somewhat silly, but it is also silly to say
that data of arbitrary length must be accepted.

This is unfortunate, because CGI, while not very efficient, is very
flexible. It provides a mechanism for existing installed servers to
work with new applications. It may be easy to rewrite a server to
handle a new application, or even to create a new module and
recompile, but those changes are not likely to affect the large
installed base of servers.

I can sympathize with the Apache developers' view that unchunking and
buffering message bodies on a site which gets millions of hits per day
could be very problematic. On the other hand, applications like
printing will normally occur on a much smaller scale, for which CGI is
perfectly appropriate. It is unfortunate that there seems to be no way
out of this dilemma.

Some servers do support unchunking and buffering for CGI, but the
large majority do not. There is no way to change this fact
retroactively. The bottom line is that this is an implementation
issue, not a protocol issue. If implementors choose not to support
functionality you need, you may just be stuck.

John Franks
john at math.nwu.edu