<!-- $PostgreSQL: pgsql/doc/src/sgml/regress.sgml,v 1.40 2004/05/21 05:07:55 tgl Exp $ -->
<chapter id="regress">
<title id="regress-title">Regression Tests</title>
<indexterm zone="regress">
<primary>regression tests</primary>
</indexterm>
<indexterm zone="regress">
<primary>test</primary>
</indexterm>
<para>
The regression tests are a comprehensive set of tests for the SQL
implementation in <productname>PostgreSQL</productname>. They test
standard SQL operations as well as the extended capabilities of
<productname>PostgreSQL</productname>. From
<productname>PostgreSQL</productname> 6.1 onward, the regression
tests are current for every official release.
</para>
<sect1 id="regress-run">
<title>Running the Tests</title>
<para>
The regression tests can be run against an already installed and
running server, or using a temporary installation within the build
tree. Furthermore, there is a <quote>parallel</quote> and a
<quote>sequential</quote> mode for running the tests. The
sequential method runs each test script in turn, whereas the
parallel method starts up multiple server processes to run groups
of tests in parallel. Parallel testing gives confidence that
interprocess communication and locking are working correctly. For
historical reasons, the sequential test is usually run against an
existing installation and the parallel method against a temporary
installation, but there are no technical reasons for this.
</para>
<para>
To run the regression tests after building but before installation,
type
<screen>
gmake check
</screen>
in the top-level directory. (Or you can change to
<filename>src/test/regress</filename> and run the command there.)
This will first build several auxiliary files, such as
some sample
user-defined trigger functions, and then run the test driver
script. At the end you should see something like
<screen>
<computeroutput>
======================
All 93 tests passed.
======================
</computeroutput>
</screen>
or otherwise a note about which tests failed. See <xref
linkend="regress-evaluation"> below for more.
</para>
<para>
Because this test method runs a temporary server, it will not work
when you are the root user (since the server will not start as root).
If you already did the build as root, you do not have to start all
over. Instead, make the regression test directory writable by
some other user, log in as that user, and restart the tests.
For example
<screen>
<prompt>root# </prompt><userinput>chmod -R a+w src/test/regress</userinput>
<prompt>root# </prompt><userinput>chmod -R a+w contrib/spi</userinput>
<prompt>root# </prompt><userinput>su - joeuser</userinput>
<prompt>joeuser$ </prompt><userinput>cd <replaceable>top-level build directory</></userinput>
<prompt>joeuser$ </prompt><userinput>gmake check</userinput>
</screen>
(The only possible <quote>security risk</quote> here is that other
users might be able to alter the regression test results behind
your back. Use common sense when managing user permissions.)
</para>
<para>
Alternatively, you can run the tests after installation, as described below.
</para>
<para>
The parallel regression test starts quite a few processes under your
user ID. Presently, the maximum concurrency is twenty parallel test
scripts, which means sixty processes: there's a server process, a
<application>psql</>, and usually a shell parent process for the
<application>psql</> for each test script.
So if your system enforces a per-user limit on the number of processes,
make sure this limit is at least seventy-five or so, else you may get
random-seeming failures in the parallel test. If you are not in
a position to raise the limit, you can cut down the degree of parallelism
by setting the <literal>MAX_CONNECTIONS</> parameter. For example,
<screen>
gmake MAX_CONNECTIONS=10 check
</screen>
runs no more than ten tests concurrently.
</para>
<para>
On some systems, the default Bourne-compatible shell
(<filename>/bin/sh</filename>) gets confused when it has to manage
too many child processes in parallel. This may cause the parallel
test run to lock up or fail. In such cases, specify a different
Bourne-compatible shell on the command line, for example:
<screen>
gmake SHELL=/bin/ksh check
</screen>
If no non-broken shell is available, you may be able to work around the
problem by limiting the number of connections, as shown above.
</para>
<para>
To run the tests after installation<![%standalone-ignore;[ (see <xref linkend="installation">)]]>,
initialize a data area and start the
server, <![%standalone-ignore;[as explained in <xref linkend="runtime">, ]]> then type
<screen>
gmake installcheck
</screen>
The tests expect to contact the server at the local host and the
default port number, unless directed otherwise by the
<envar>PGHOST</envar> and <envar>PGPORT</envar> environment variables.
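For example, assuming an installed server listening on the non-default
port 5433 (the port number here is purely illustrative):
<screen>
env PGPORT=5433 gmake installcheck
</screen>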
</para>
</sect1>
<sect1 id="regress-evaluation">
<title>Test Evaluation</title>
<para>
Some properly installed and fully functional
<productname>PostgreSQL</productname> installations can
<quote>fail</quote> some of these regression tests due to
platform-specific artifacts such as varying floating-point representation
and time zone support. The tests are currently evaluated using a simple
<command>diff</command> comparison against the outputs
generated on a reference system, so the results are sensitive to
small system differences. When a test is reported as
<quote>failed</quote>, always examine the differences between
expected and actual results; you may well find that the
differences are not significant. Nonetheless, we still strive to
maintain accurate reference files across all supported platforms, so
on a supported platform you can expect all tests to pass.
</para>
<para>
The actual outputs of the regression tests are in files in the
<filename>src/test/regress/results</filename> directory. The test
script uses <command>diff</command> to compare each output
file against the reference outputs stored in the
<filename>src/test/regress/expected</filename> directory. Any
differences are saved for your inspection in
<filename>src/test/regress/regression.diffs</filename>. (Or you
can run <command>diff</command> yourself, if you prefer.)
</para>
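<para>
For example, to inspect the differences for one particular test
yourself, run <command>diff</command> from the
<filename>src/test/regress</filename> directory (the
<literal>char</literal> test is used here purely as an illustration):
<screen>
diff results/char.out expected/char.out
</screen>
</para>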
<sect2>
<title>Error message differences</title>
<para>
Some of the regression tests involve intentional invalid input
values. Error messages can come from either the
<productname>PostgreSQL</productname> code or from the host
platform system routines. In the latter case, the messages may
vary between platforms, but should reflect similar
information. These differences in messages will result in a
<quote>failed</quote> regression test that can be validated by
inspection.
</para>
</sect2>
<sect2>
<title>Locale differences</title>
<para>
If you run the tests against an already-installed server that was
initialized with a collation-order locale other than C, then
there may be differences due to sort order and follow-up
failures. The regression test suite is set up to handle this
problem by providing alternative result files that together are
known to handle a large number of locales. For example, for the
<literal>char</literal> test, the expected file
<filename>char.out</filename> handles the <literal>C</> and <literal>POSIX</> locales,
and the file <filename>char_1.out</filename> handles many other
locales. The regression test driver will automatically pick the
best file to match against when checking for success and for
computing failure differences. (This means that the regression
tests cannot detect whether the results are appropriate for the
configured locale. The tests will simply pick the one result
file that works best.)
</para>
<para>
If for some reason the existing expected files do not cover some
locale, you can add a new file. The naming scheme is
<literal><replaceable>testname</>_<replaceable>digit</>.out</>.
The actual digit is not significant. Remember that the
regression test driver will consider all such files to be equally
valid test results. If the test results are platform-specific,
the technique described in <xref linkend="regress-platform">
should be used instead.
</para>
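<para>
As a minimal sketch, assuming the <literal>char</literal> test fails
only because of your locale's sort order, you have verified by hand
that <filename>results/char.out</filename> is correct for that locale,
and the digit 2 is not already taken, you could add the new variant
from the <filename>src/test/regress</filename> directory like this:
<screen>
cp results/char.out expected/char_2.out
</screen>
The test driver will thereafter accept this file as one of the equally
valid results for the <literal>char</literal> test.
</para>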
</sect2>
<sect2>
<title>Date and time differences</title>
<para>
A few of the queries in the <filename>horology</filename> test will
fail if you run the test on the day of a daylight-saving time
changeover, or the day after one. These queries expect that
the intervals between midnight yesterday, midnight today and
midnight tomorrow are exactly twenty-four hours --- which is wrong
if daylight-saving time went into or out of effect meanwhile.
</para>
<note>
<para>
Because USA daylight-saving time rules are used, this problem always
occurs on the first Sunday of April, the last Sunday of October,
and their following Mondays, regardless of when daylight-saving time
is in effect where you live. Also note that the problem appears or
disappears at midnight Pacific time (UTC-7 or UTC-8), not midnight
your local time. Thus the failure may appear late on Saturday or
persist through much of Tuesday, depending on where you live.
</para>
</note>
<para>
Most of the date and time results are dependent on the time zone
environment. The reference files are generated for time zone
<literal>PST8PDT</literal> (Berkeley, California), and there will be apparent
failures if the tests are not run with that time zone setting.
The regression test driver sets environment variable
<envar>PGTZ</envar> to <literal>PST8PDT</literal>, which normally
ensures proper results. However, your operating system must provide
support for the <literal>PST8PDT</literal> time zone, or the time zone-dependent
tests will fail. To verify that your machine does have this
support, type the following:
<screen>
env TZ=PST8PDT date
</screen>
This command should report the current system time in the
<literal>PST8PDT</literal> time zone. If the <literal>PST8PDT</literal>
time zone is not available, your system may report the time in UTC
instead; in that case you can set the time zone rules explicitly:
<programlisting>
PGTZ='PST8PDT7,M04.01.0,M10.05.0'; export PGTZ
</programlisting>
</para>
<para>
There appear to be some systems that do not accept the
recommended syntax for explicitly setting the local time zone
rules; you may need to use a different <envar>PGTZ</envar>
setting on such machines.
</para>
<para>
Some systems using older time-zone libraries fail to apply
daylight-saving corrections to dates before 1970, causing
pre-1970 <acronym>PDT</acronym> times to be displayed in <acronym>PST</acronym> instead. This will
result in localized differences in the test results.
</para>
</sect2>
<sect2>
<title>Floating-point differences</title>
<para>
Some of the tests involve computing 64-bit floating-point numbers (<type>double
precision</type>) from table columns. Differences in
results involving mathematical functions of <type>double
precision</type> columns have been observed. The <literal>float8</> and
<literal>geometry</> tests are particularly prone to small differences
across platforms, or even with different compiler optimization options.
Human eyeball comparison is needed to determine the real
significance of these differences, which are usually 10 places to
the right of the decimal point.
</para>
<para>
Some systems display minus zero as <literal>-0</>, while others
just show <literal>0</>.
</para>
<para>
Some systems signal errors from <function>pow()</function> and
<function>exp()</function> differently from the mechanism
expected by the current <productname>PostgreSQL</productname>
code.
</para>
</sect2>
<sect2>
<title>Row ordering differences</title>
<para>
You might see differences in which the same rows are output in a
different order than what appears in the expected file. In most cases
this is not, strictly speaking, a bug. Most of the regression test
scripts are not so pedantic as to use an <literal>ORDER BY</> for every single
<literal>SELECT</>, and so their result row orderings are not well-defined
according to the letter of the SQL specification. In practice, since we are
looking at the same queries being executed on the same data by the same
software, we usually get the same result ordering on all platforms, and
so the lack of <literal>ORDER BY</> isn't a problem. Some queries do exhibit
cross-platform ordering differences, however. (Ordering differences
can also be triggered by non-C locale settings.)
</para>
<para>
Therefore, if you see an ordering difference, it's not something to
worry about, unless the query does have an <literal>ORDER BY</> that your result
is violating. But please report it anyway, so that we can add an
<literal>ORDER BY</> to that particular query and thereby eliminate the bogus
<quote>failure</quote> in future releases.
</para>
<para>
You might wonder why we don't order all the regression test queries explicitly to
get rid of this issue once and for all. The reason is that doing so would
make the regression tests less useful, not more, since they'd tend
to exercise query plan types that produce ordered results to the
exclusion of those that don't.
</para>
</sect2>
<sect2>
<title>The <quote>random</quote> test</title>
<para>
The <literal>random</literal> test script is intended to produce
random results. In rare cases, this causes the random regression
test to fail. Typing
<programlisting>
diff results/random.out expected/random.out
</programlisting>
should produce only one or a few lines of differences. You need
not worry unless the random test repeatedly fails.
</para>
</sect2>
</sect1>
<!-- We might want to move the following section into the developer's guide. -->
<sect1 id="regress-platform">
<title>Platform-specific comparison files</title>
<para>
Since some of the tests inherently produce platform-specific
results, we have provided a way to supply platform-specific result
comparison files. Frequently, the same variation applies to
multiple platforms; rather than supplying a separate comparison
file for every platform, there is a mapping file that defines
which comparison file to use. So, to eliminate bogus test
<quote>failures</quote> for a particular platform, you must choose
or make a variant result file, and then add a line to the mapping
file, which is <filename>src/test/regress/resultmap</filename>.
</para>
<para>
Each line in the mapping file is of the form
<synopsis>
testname/platformpattern=comparisonfilename
</synopsis>
The test name is just the name of the particular regression test
module. The platform pattern is a pattern in the style of the Unix
tool <command>expr</> (that is, a regular expression with an implicit
<literal>^</literal> anchor
at the start). It is matched against the platform name as printed
by <command>config.guess</command> followed by
<literal>:gcc</literal> or <literal>:cc</literal>, depending on
whether you use the GNU compiler or the system's native compiler
(on systems where there is a difference). The comparison file
name is the name of the substitute result comparison file.
</para>
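<para>
To see the platform name that your pattern must match (apart from the
compiler suffix), you can run <command>config.guess</command> yourself;
in a <productname>PostgreSQL</productname> source tree it is found in
the <filename>config</filename> subdirectory. The output below is only
an example:
<screen>
<prompt>$ </prompt><userinput>./config/config.guess</userinput>
<computeroutput>i686-pc-linux-gnu</computeroutput>
</screen>
</para>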
<para>
For example: some systems interpret very small floating-point values
as zero, rather than reporting an underflow error. This causes a
few differences in the <filename>float8</> regression test.
Therefore, we provide a variant comparison file,
<filename>float8-small-is-zero.out</filename>, which includes
the results to be expected on these systems. To silence the bogus
<quote>failure</quote> message on <systemitem>OpenBSD</systemitem>
platforms, <filename>resultmap</filename> includes
<programlisting>
float8/i.86-.*-openbsd=float8-small-is-zero
</programlisting>
which will trigger on any machine for which the output of
<command>config.guess</command> matches <literal>i.86-.*-openbsd</literal>.
Other lines
in <filename>resultmap</> select the variant comparison file for other
platforms where it's appropriate.
</para>
</sect1>
</chapter>
<!-- Keep this comment at the end of the file
Local variables:
mode:sgml
sgml-omittag:nil
sgml-shorttag:t
sgml-minimize-attributes:nil
sgml-always-quote-attributes:t
sgml-indent-step:1
sgml-indent-data:t
sgml-parent-document:nil
sgml-default-dtd-file:"./reference.ced"
sgml-exposed-tags:nil
sgml-local-catalogs:("/usr/lib/sgml/catalog")
sgml-local-ecat-files:nil
End:
-->