This module determines how doctest results are reported to the user.
It also computes the exit status in the ``error_status`` attribute of :class:`DocTestReporter`. This is a bitwise OR of bits, each flagging a kind of problem encountered (such as a doctest failure, a timeout, or a bad exit code).
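As a generic illustration of the bitwise-OR pattern (the flag values and names below are hypothetical, not Sage's actual bit assignments), a combined status can be decoded back into its individual conditions:

```python
# Illustrative only: hypothetical flag values, not Sage's actual bits.
FLAGS = {1: "doctest failure", 4: "timeout", 8: "bad exit"}

def decode_status(status):
    # Return the names of all conditions whose bit is set in status.
    return [name for bit, name in sorted(FLAGS.items()) if status & bit]

status = 1 | 8                # a failure and a bad exit occurred
print(decode_status(status))  # ['doctest failure', 'bad exit']
```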
AUTHORS:
Bases: sage.structure.sage_object.SageObject
This class reports to the user on the results of doctests.
Print out the postscript that summarizes the doctests that were run.
EXAMPLES:
First we have to set up a bunch of stuff:
sage: from sage.doctest.reporting import DocTestReporter
sage: from sage.doctest.control import DocTestController, DocTestDefaults
sage: from sage.doctest.sources import FileDocTestSource, DictAsObject
sage: from sage.doctest.forker import SageDocTestRunner
sage: from sage.doctest.parsing import SageOutputChecker
sage: from sage.doctest.util import Timer
sage: from sage.env import SAGE_SRC
sage: import os, sys, doctest
sage: filename = os.path.join(SAGE_SRC,'sage','doctest','reporting.py')
sage: DD = DocTestDefaults()
sage: FDS = FileDocTestSource(filename,DD)
sage: DC = DocTestController(DD,[filename])
sage: DTR = DocTestReporter(DC)
Now we pretend to run some doctests:
sage: DTR.report(FDS, True, 0, None, "Output so far...", pid=1234)
Timed out
**********************************************************************
Tests run before process (pid=1234) timed out:
Output so far...
**********************************************************************
sage: DTR.report(FDS, False, 3, None, "Output before bad exit")
Bad exit: 3
**********************************************************************
Tests run before process failed:
Output before bad exit
**********************************************************************
sage: doctests, extras = FDS.create_doctests(globals())
sage: runner = SageDocTestRunner(SageOutputChecker(), verbose=False, sage_options=DD, optionflags=doctest.NORMALIZE_WHITESPACE|doctest.ELLIPSIS)
sage: t = Timer().start().stop()
sage: t.annotate(runner)
sage: DC.timer = t
sage: D = DictAsObject({'err':None})
sage: runner.update_results(D)
0
sage: DTR.report(FDS, False, 0, (sum([len(t.examples) for t in doctests]), D), "Good tests")
[... tests, ... s]
sage: runner.failures = 1
sage: runner.update_results(D)
1
sage: DTR.report(FDS, False, 0, (sum([len(t.examples) for t in doctests]), D), "Doctest output including the failure...")
[... tests, 1 failure, ... s]
Now we can show the output of finalize:
sage: DC.sources = [None] * 4 # to fool the finalize method
sage: DTR.finalize()
----------------------------------------------------------------------
sage -t .../sage/doctest/reporting.py # Timed out
sage -t .../sage/doctest/reporting.py # Bad exit: 3
sage -t .../sage/doctest/reporting.py # 1 doctest failed
----------------------------------------------------------------------
Total time for all tests: 0.0 seconds
cpu time: 0.0 seconds
cumulative wall time: 0.0 seconds
If we interrupted doctests, then the number of files tested will not match the number of sources on the controller:
sage: DC.sources = [None] * 6
sage: DTR.finalize()
----------------------------------------------------------------------
sage -t .../sage/doctest/reporting.py # Timed out
sage -t .../sage/doctest/reporting.py # Bad exit: 3
sage -t .../sage/doctest/reporting.py # 1 doctest failed
Doctests interrupted: 4/6 files tested
----------------------------------------------------------------------
Total time for all tests: 0.0 seconds
cpu time: 0.0 seconds
cumulative wall time: 0.0 seconds
Report on the result of running doctests on a given source.
This does not print the ``report_head()``, which is assumed to have been printed already.
INPUT:

- ``source`` -- the doctest source that was tested
- ``timeout`` -- boolean; whether the doctests timed out
- ``return_code`` -- integer; the exit status of the process running the doctests (negative when killed by a signal)
- ``results`` -- a tuple of the number of tests run and the results object, or ``None`` on failure
- ``output`` -- string; the output produced so far, printed when reporting a failure
- ``pid`` -- integer (optional); the pid of the worker process, mentioned in timeout reports
EXAMPLES:
sage: from sage.doctest.reporting import DocTestReporter
sage: from sage.doctest.control import DocTestController, DocTestDefaults
sage: from sage.doctest.sources import FileDocTestSource, DictAsObject
sage: from sage.doctest.forker import SageDocTestRunner
sage: from sage.doctest.parsing import SageOutputChecker
sage: from sage.doctest.util import Timer
sage: from sage.env import SAGE_SRC
sage: import os, sys, doctest
sage: filename = os.path.join(SAGE_SRC,'sage','doctest','reporting.py')
sage: DD = DocTestDefaults()
sage: FDS = FileDocTestSource(filename,DD)
sage: DC = DocTestController(DD,[filename])
sage: DTR = DocTestReporter(DC)
You can report a timeout:
sage: DTR.report(FDS, True, 0, None, "Output so far...", pid=1234)
Timed out
**********************************************************************
Tests run before process (pid=1234) timed out:
Output so far...
**********************************************************************
sage: DTR.stats
{'sage.doctest.reporting': {'failed': True, 'walltime': 1000000.0}}
Or a process that returned a bad exit code:
sage: DTR.report(FDS, False, 3, None, "Output before trouble")
Bad exit: 3
**********************************************************************
Tests run before process failed:
Output before trouble
**********************************************************************
sage: DTR.stats
{'sage.doctest.reporting': {'failed': True, 'walltime': 1000000.0}}
Or a process that segfaulted:
sage: import signal
sage: DTR.report(FDS, False, -signal.SIGSEGV, None, "Output before trouble")
Killed due to segmentation fault
**********************************************************************
Tests run before process failed:
Output before trouble
**********************************************************************
sage: DTR.stats
{'sage.doctest.reporting': {'failed': True, 'walltime': 1000000.0}}
Report a timeout with results and a SIGKILL:
sage: DTR.report(FDS, True, -signal.SIGKILL, (1,None), "Output before trouble")
Timed out after testing finished (and interrupt failed)
**********************************************************************
Tests run before process timed out:
Output before trouble
**********************************************************************
sage: DTR.stats
{'sage.doctest.reporting': {'failed': True, 'walltime': 1000000.0}}
This is an internal error, since ``results`` is ``None``:
sage: DTR.report(FDS, False, 0, None, "All output")
Error in doctesting framework (bad result returned)
**********************************************************************
Tests run before error:
All output
**********************************************************************
sage: DTR.stats
{'sage.doctest.reporting': {'failed': True, 'walltime': 1000000.0}}
Or tell the user that everything succeeded:
sage: doctests, extras = FDS.create_doctests(globals())
sage: runner = SageDocTestRunner(SageOutputChecker(), verbose=False, sage_options=DD, optionflags=doctest.NORMALIZE_WHITESPACE|doctest.ELLIPSIS)
sage: Timer().start().stop().annotate(runner)
sage: D = DictAsObject({'err':None})
sage: runner.update_results(D)
0
sage: DTR.report(FDS, False, 0, (sum([len(t.examples) for t in doctests]), D), "Good tests")
[... tests, ... s]
sage: DTR.stats
{'sage.doctest.reporting': {'walltime': ...}}
Or inform the user that some doctests failed:
sage: runner.failures = 1
sage: runner.update_results(D)
1
sage: DTR.report(FDS, False, 0, (sum([len(t.examples) for t in doctests]), D), "Doctest output including the failure...")
[... tests, 1 failure, ... s]
If the user has requested that we report on skipped doctests, we do so:
sage: DC.options = DocTestDefaults(show_skipped=True)
sage: import collections
sage: optionals = collections.defaultdict(int)
sage: optionals['magma'] = 5; optionals['long time'] = 4; optionals[''] = 1; optionals['not tested'] = 2
sage: D = DictAsObject(dict(err=None,optionals=optionals))
sage: runner.failures = 0
sage: runner.update_results(D)
0
sage: DTR.report(FDS, False, 0, (sum([len(t.examples) for t in doctests]), D), "Good tests")
1 unlabeled test not run
4 long tests not run
5 magma tests not run
2 other tests skipped
[... tests, ... s]
Test an internal error in the reporter:
sage: DTR.report(None, None, None, None, None)
Traceback (most recent call last):
...
AttributeError: 'NoneType' object has no attribute 'basename'
Return the ``sage -t [options] file.py`` line as a string.
INPUT:

- ``source`` -- a doctest source, whose file name (together with the current doctest options) determines the header line
EXAMPLES:
sage: from sage.doctest.reporting import DocTestReporter
sage: from sage.doctest.control import DocTestController, DocTestDefaults
sage: from sage.doctest.sources import FileDocTestSource
sage: from sage.doctest.forker import SageDocTestRunner
sage: from sage.env import SAGE_SRC
sage: import os
sage: filename = os.path.join(SAGE_SRC,'sage','doctest','reporting.py')
sage: DD = DocTestDefaults()
sage: FDS = FileDocTestSource(filename,DD)
sage: DC = DocTestController(DD, [filename])
sage: DTR = DocTestReporter(DC)
sage: print(DTR.report_head(FDS))
sage -t .../sage/doctest/reporting.py
The same with various options:
sage: DD.long = True
sage: print(DTR.report_head(FDS))
sage -t --long .../sage/doctest/reporting.py
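The flag handling can be sketched as follows (a simplified stand-in built only from the behavior shown above, not Sage's actual implementation):

```python
def report_head(filename, long=False):
    # Simplified stand-in for DocTestReporter.report_head: assemble
    # the "sage -t [options] file" line, appending --long on request.
    cmd = "sage -t"
    if long:
        cmd += " --long"
    return cmd + " " + filename

print(report_head("sage/doctest/reporting.py"))
# sage -t sage/doctest/reporting.py
print(report_head("sage/doctest/reporting.py", long=True))
# sage -t --long sage/doctest/reporting.py
```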
Return a string describing a signal number.
EXAMPLES:
sage: import signal
sage: from sage.doctest.reporting import signal_name
sage: signal_name(signal.SIGSEGV)
'segmentation fault'
sage: signal_name(9)
'kill signal'
sage: signal_name(12345)
'signal 12345'
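A comparable lookup can be built on the standard library's ``signal`` module (a sketch; ``describe_signal`` is a hypothetical helper, not the Sage function documented above):

```python
import signal

def describe_signal(sig):
    # Hypothetical helper: resolve a signal number to its symbolic
    # name via the stdlib, falling back to a generic label.
    try:
        return signal.Signals(sig).name          # e.g. 'SIGSEGV'
    except ValueError:
        return "signal %d" % sig

print(describe_signal(signal.SIGSEGV))  # SIGSEGV (on POSIX)
print(describe_signal(12345))           # signal 12345
```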