It is difficult to estimate the times, because they
depend entirely on the consistency of the
implementations. I do know that a reasonably
well-implemented printer will take about four hours,
but we have the added complexity of manually
generating printer transactions in a controlled
manner. If this gets fouled up or someone makes a
mistake on the input generation, we will have to start
over.
For all the items listed in #6 below, InterWorking
Labs has tests. We are modifying the output of the
tests so that only each object's name and its current
value are included in an output file. That way the
test results for each printer are simply a list of all
objects and their values.
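As a rough illustration of that filtering step, here is a minimal
Python sketch. The input line format, the regular expression, and the
file names in the usage comment are assumptions; the actual
InterWorking Labs output will need its own pattern.

    # filter_results.py -- reduce a verbose test log to "name = value"
    # lines.  The line format matched here is hypothetical; the pattern
    # will need adjusting to the actual tool output.
    import re
    import sys

    # Match lines such as "prtGeneralConfigChanges.1 = Counter32: 7",
    # keeping the object name and the value (the type tag is dropped).
    OBJECT_LINE = re.compile(r"^\s*([\w.\-]+)\s*=\s*(?:\w+:\s*)?(.+)$")

    def filter_log(infile, outfile):
        for line in infile:
            m = OBJECT_LINE.match(line)
            if m:
                outfile.write(f"{m.group(1)} = {m.group(2).strip()}\n")

    if __name__ == "__main__":
        # Usage: python filter_results.py < raw.log > printer1.objects
        filter_log(sys.stdin, sys.stdout)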
I would also like to clarify that using these tests and
generating this test plan is strictly a volunteer effort
on the part of InterWorking Labs. I am not "in charge"
of the test event, and I am not controlling it or
running it. Obviously if there are problems with the
Test Plan, I need to hear about them so we can get it
right and meet the goals (see the earlier test plan draft).
I am acting as Working Group Co-Chair and making my volunteer
contribution. Making the test event successful is a
group effort by the participants. To paraphrase
the late U.S. President John F. Kennedy, "Ask not what the
Working Group Co-Chair can do for you; ask what you can
do to advance RFC1759 to Draft Standard." :-)
Printer MIB Test Plan (revised 011597)
Earlier releases of the Test Plan defined the goal, the RFC references, the
reasoning, etc. Following is a detailed summary of what we will test, how
we will test, and the order of the testing.
We are testing that every single one of the 179 MIB objects is implemented in
at least two printers, and that the implementations are consistent among all
the printers. This is the IETF requirement for advancing to Draft Standard.
We are not testing conformance, compliance, boundary conditions (except indirectly),
MIB-II and its instrumentation, performance, data representation (except for
inconsistencies among implementations), etc.
1. Set up printer and Win95 laptop (you bring both).
2. Check for link state.
3. Use Dr. Watson (see www.cavebear.com) to do ARP and ping to each printer
and to check all IP addresses, subnet masks, etc.
4. Do a MIB walk on each product and count the returned objects (a
   walk-and-count sketch appears after step 7).
5. Generate controlled (in a pre-defined script) printer-specific
   transactions, including jobs in various formats and sizes
   (known page counts), PostScript files, raw ASCII, and specific
   applications (e.g., Word, Navigator, Quicken).
6. Start the test software application for the following areas. After
   completing the test for each line item, check that the values of the
   individual objects are contained in the specified output file (a
   simple per-area check is sketched after step 7).
Host Resources Storage Group
Host Resources Device Table
Host Resources Processor Table
Host Resources Network Table
Host Resources Printer Table
Host Resources Disk Storage Table
Host Resources Partition Table
Host Resources File System Table
Responsible Party Group
Cover Table
General Printer Group
Input Group
Extended Input Group
Input Media Group
Output Group
Extended Output Group
Output Dimensions Group
Output Features Group
Marker Group
Marker Supply Table
Marker Colorant Table
Media Path Group
Channel Table
Interpreter Table
Console Group
Display Buffer Table
Console Light Table
Trap Test: coverOpen
Trap Test: noPaper
Trap Test: noToner
Trap Test: paperjam
7. Compare the values of the objects in each area returned from each
   printer. Make a list of all variations (a comparison sketch follows
   below). Decide how the spec should be modified.
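For step 4, a minimal sketch of the walk-and-count. It assumes
net-snmp's snmpwalk is installed and that the printers answer SNMPv1
with community "public"; the hostnames are placeholders.

    # walk_count.py -- walk the Printer MIB on each printer and count
    # the objects returned (step 4).  Assumes net-snmp's snmpwalk is on
    # the PATH; the hostnames and community string are placeholders.
    import subprocess

    PRINTERS = ["printer1.example.com", "printer2.example.com"]
    PRINTER_MIB = "1.3.6.1.2.1.43"   # printmib subtree, RFC 1759

    for host in PRINTERS:
        result = subprocess.run(
            ["snmpwalk", "-v1", "-c", "public", host, PRINTER_MIB],
            capture_output=True, text=True)
        rows = [l for l in result.stdout.splitlines() if " = " in l]
        print(f"{host}: {len(rows)} objects returned")
        # Save the name/value rows for the checks in steps 6 and 7.
        with open(f"{host}.walk", "w") as out:
            out.write("\n".join(rows) + "\n")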
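For step 6, a quick way to confirm that an area's objects actually
landed in the output file. The object-name prefixes shown are
illustrative, and a real table would need one entry per area in the
list above; the input file name is a placeholder.

    # check_areas.py -- after a test area runs, confirm that objects
    # from that area appear in the printer's output file (step 6).
    AREA_PREFIXES = {
        "General Printer Group": "prtGeneral",
        "Cover Table":           "prtCover",
        "Marker Group":          "prtMarker",
        "Channel Table":         "prtChannel",
        # ... one entry per area listed above
    }

    def area_present(path, prefix):
        with open(path) as f:
            return any(prefix in line for line in f)

    for area, prefix in AREA_PREFIXES.items():
        status = "ok" if area_present("printer1.walk", prefix) else "MISSING"
        print(f"{area}: {status}")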
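And for step 7, a sketch of the cross-printer comparison, assuming each
printer's results were saved as "name = value" lines as in the step-4
script; the file names are again placeholders.

    # compare_walks.py -- list every object whose value differs across
    # printers, or that is missing from one of them (step 7).

    def load(path):
        objects = {}
        with open(path) as f:
            for line in f:
                if " = " in line:
                    name, value = line.split(" = ", 1)
                    objects[name.strip()] = value.strip()
        return objects

    walks = {p: load(p) for p in ("printer1.walk", "printer2.walk")}
    names = sorted(set().union(*(set(w) for w in walks.values())))

    for name in names:
        values = {p: w.get(name, "<not implemented>")
                  for p, w in walks.items()}
        if len(set(values.values())) > 1:   # a variation to review
            print(name)
            for p, v in values.items():
                print(f"    {p}: {v}")

Keep in mind that many objects (page counters, uptimes, description
strings) will legitimately differ between printers, so the resulting
list is raw material for the group's discussion, not an automatic
pass/fail.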