Thanks for raising these questions. It's time we started answering
them, too, IMHO.
I fully understand the desire of the printer industry to address the
global potential for Internet-based printing, particularly the "pay
for print" markets. I also understand the desire for a general--but
often oversimplified--feature to replace certain fax applications with
Internet-based printing.
Where I get a bit concerned is that it seems as if intranet printing
is getting shoved into the backseat (or even the trunk) in all of this.
Thinking back to the IETF BOF in San Jose last December, perhaps this
concern was also shared by the many folks who simply wanted to extend
and better standardize the LPD protocol (RFC 1179).
To be sure, an HTTP-based printing protocol could be used within the
general intranet environment. However, it is not necessarily the
optimal approach with regard to resources and performance.
Using a non-HTTP printing protocol should not in any way preclude
the use of HTTP and other typical Web technologies to improve the
overall printing environment.
As we dive deeper and deeper into the HTTP-compatible IPP protocol,
things are getting murkier and murkier. The complexities we have
already stumbled upon are barely justified by the level of capabilities
defined so far, not to mention the complexities we have yet to
encounter.
Our company has had the opportunity to use a clean and *simple*
network printing protocol to effect feature-rich, high-performance
printing in intranet environments. That protocol provides virtually all
features and capabilities currently defined in IPP...and yet has
been proven in the field for some 10 years now.
I am curious how many people out there would be willing to consider
the definition of a leaner, meaner network printing protocol for use
in intranet environments. Such a protocol would not necessarily be
targeted as a replacement for the current IPP work; rather, it would
be conducted in parallel to that work, and leverage the basic model
and operational aspects defined thus far.
Any interest out there?
...jay
PS: Has anyone ever asked Netscape how much effort would be involved
in adding a new standard protocol to their browser technology?
----------------------------------------------------------------------
-- JK Martin | Email: jkm@underscore.com --
-- Underscore, Inc. | Voice: (603) 889-7000 --
-- 41C Sagamore Park Road | Fax: (603) 889-2699 --
-- Hudson, NH 03051-4915 | Web: http://www.underscore.com --
----------------------------------------------------------------------
----- Begin Included Message -----
X-URI: http://www.cs.utk.edu/~moore/
From: Keith Moore <moore@cs.utk.edu>
To: Robert.Herriot@eng.sun.com (Robert Herriot)
cc: ipp@pwg.org, http-wg@cuckoo.hpl.hp.com, moore@cs.utk.edu
Subject: Re: IPP>PRO - http comments
Date: Thu, 01 May 1997 21:43:28 -0400
> We seem to have a wide variety of beliefs about how Content-Length
> and boundary strings interact when a part of a multipart/* contains a
> Content-Length. I prefer the behavior described by the last
> email in this series. But this exchange shows a lack of consensus.
> Can we reach consensus, or is this an email versus HTTP difference
> in behavior?
This whole exchange is a good illustration of why I'm opposed to the
reuse of HTTP/1.1 for non-web applications, including a printing
protocol. HTTP/1.1 is a real mess. MIME is also a real mess. They
are messy because they needed to be backward compatible with a widely
deployed installed base, while adding new features not anticipated in
the original protocol design. Any new printing protocol will have
its own baggage also. Should it then inherit additional baggage from
HTTP and MIME?
Being able to print from a web browser would be a Good Thing, but I
seriously question whether it's worthwhile to standardize this
interface. Print shops are going to need to add their own
front-ends anyway, for queueing and for additional services not
supported directly by a printer.
It would be far better to define a new protocol which doesn't inherit
the baggage of HTTP/1.1. The "baggage" isn't the amount of code
required to implement the protocol, it's the difficulty in dealing
with lots of protocol variants -- caused by multiple earlier versions
with ill-defined specifications, as well as future changes to HTTP --
and the resulting lack of interoperability and increased
testing/support costs.
(not to mention the overhead of having arguments about whether
Content-Length is valid within a multipart content which is carried
over HTTP.)
It seems like the lpr problem all over again, only worse.
(Except that the one thing "right" about the lpr protocol design is that
its framing is almost foolproof -- a byte count followed by that many
bytes.)
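To make that "count, then bytes" point concrete, here is a rough sketch
(my own illustration in Python, loosely modeled on the RFC 1179
receive-data-file subcommand; the function name and the exact parsing
of the command line are mine, not the RFC's):

    def receive_counted_file(conn):
        # Read the subcommand line, e.g. b"\x03" + b"1234 dfA001host\n":
        # the octet count is announced before any data arrives.
        line = b""
        while not line.endswith(b"\n"):
            line += conn.recv(1)
        count = int(line[1:].split()[0])

        # Read exactly 'count' octets -- no delimiters to scan for,
        # no guessing about where the file ends.
        data = b""
        while len(data) < count:
            chunk = conn.recv(count - len(data))
            if not chunk:
                raise ConnectionError("peer closed before sending all octets")
            data += chunk

        conn.recv(1)            # trailing zero octet signals completion
        conn.sendall(b"\x00")   # positive acknowledgement
        return data

The receiver never has to inspect the job data itself to find the end
of the file; that is the "almost foolproof" property.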
Do you really want to burn HTTP/1.1 and MIME support into printer
ROMs? Why not use something which is simpler and easier to get right?
Keith
p.s. Use of Content-Length *within* a multipart (as opposed to the
top level) seems like a huge design botch precisely because it raises
the question of what happens if Content-Length is wrong (as it often
is). If there's really a good reason for using it within a multipart,
I'd like to know about it.
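To make the botch concrete, here is a toy example (mine, in Python; the
boundary, part content, and both framing functions are made up for
illustration and taken from no draft) of how a boundary-scanning
receiver and a count-trusting receiver frame the very same part
differently once the declared count is wrong:

    body = (b"--frontier\r\n"
            b"Content-Type: application/postscript\r\n"
            b"Content-Length: 10\r\n"        # claims 10 octets...
            b"\r\n"
            b"%!PS-Adobe-3.0\r\n"            # ...but 14 octets precede the boundary
            b"--frontier--\r\n")

    def part_by_boundary(data):
        # MIME rule: the part ends at the CRLF before the next boundary line.
        start = data.index(b"\r\n\r\n") + 4
        end = data.index(b"\r\n--frontier", start)
        return data[start:end]

    def part_by_count(data):
        # Alternative reading: trust the declared Content-Length.
        headers, _, rest = data.partition(b"\r\n\r\n")
        count = int(headers.split(b"Content-Length: ")[1].split(b"\r\n")[0])
        return rest[:count]

    print(part_by_boundary(body))   # b'%!PS-Adobe-3.0'
    print(part_by_count(body))      # b'%!PS-Adobe' -- truncated; the count lied

Two implementations that pick different answers here will not
interoperate, and that is precisely the sort of argument I'd rather a
printing protocol not inherit.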
----- End Included Message -----