The Greg testing framework

for Greg Version 1.4

February 2001

Richard Frith-Macdonald <[email protected]>


Copying

See the file `COPYING.LIB'.

New in this release

New in 1.4

New in 1.3

New in 1.2

New in 1.1

New in 1.0

New in 0.7

New in 0.6

New in 0.5

New in 0.4

New in 0.3

New in 0.2

General news

This release of Greg provides a test framework much like that of DejaGNU, but also provides Guile modules to permit `embedded' testing of applications that use Guile as a scripting language and libraries that are directly accessible to Guile.

This release has been tested with Guile 1.3.4 and 1.3.3; it will certainly not run on versions earlier than 1.3.1.

Apologies to the Guile/Scheme/Lisp programmers out there - I came to this from Objective-C programming and taught myself Scheme during the four weeks in which I wrote GNUstep-Guile and Greg - so my code is probably really ugly, but it does seem to work.

The code to run a child process in a pseudo-terminal works for GNU/Linux and SysV4.2 systems; it probably needs work to make it more portable. Please email bugfixes and comments to <[email protected]> or directly to me at <[email protected]>.

Greg in brief

Greg is a framework for testing other programs and libraries. Its purpose is to provide a single front end for all tests and to be a small, simple framework for writing tests. Greg builds on the Guile language to provide all the power (and more) of other test frameworks with greater simplicity and ease of use.

The simplicity of the Greg framework makes it easy to write tests for any program, but it was specifically written for use with GNUstep-Guile to permit direct testing of the GNUstep libraries without the need to run a separate driver program.

The core functionality of Greg is a Guile module which can be loaded into any software with an embedded Guile interpreter. Any program which uses Guile as its scripting language can therefore use Greg to test itself directly!

For testing external programs, Greg provides a compiled module that may be dynamically linked into Guile to permit you to run an application as a child process on a pseudo-terminal. In conjunction with the standard Guile `expect' module, this lets you test external programs.

Also provided is `greg' - a Guile script to invoke the Greg test framework in much the same way that runtest is used in DejaGNU.

All tests have the same output format (enforced by the greg-testcase procedure). Greg's output is designed to be both readable and readily parsed by other software, so that it can be used as input to customised testing processes.

Greg provides most of the functionality of DejaGNU but is rather simpler. It omits specific support for cross-platform/remote testing since this is really rather trivial to add where required and tends to vary from site to site so much that an attempt at a generic solution is pretty pointless. What Greg does do is provide hooks to let you easily introduce site-specific code for handling those sorts of situations.

The current version of Greg can normally be found on GNU ftp sites, with documentation online at http://www.gnu.org/software/greg/gregdoc.html

or, for the bleeding edge, it is available by anonymous CVS as part of the GNUstep-Guile package in the GNUstep project -

CVSROOT=":pserver:[email protected]:/gnustep"
export CVSROOT
cvs login (password is `anoncvs')
cvs -z3 checkout guile

How to run a Greg testsuite

To run tests from an existing collection, try running

make check

If the check target exists, it usually saves you some trouble--for instance, it can set up any auxiliary programs or other files needed by the tests.

Alternatively, if you are in the top-level source directory of an existing testsuite (i.e. there are subdirectories containing files with a `.scm' extension), you can get the `greg' script to test all the tools in the directory by typing -

greg

Finally, if you just want to run the tests that are in a specific file (or files), you can get the `greg' script to run them simply by listing the files on the command line.

greg a-file-to-run another-file-to-run

or, for verbose output -

greg --verbose a-file-to-run another-file-to-run

If you have a test suite that is intended to be used for `embedded' testing, you need to start the application to be tested, gain access to its Guile command line (or other Guile interface) and enter the commands -

(use-modules (ice-9 greg))
(greg-test-all)

A trivial example of a testcase

Each Greg test is a Guile script; the tests vary widely in complexity, depending on the nature of the tool and the feature tested.

;
; GNUstep-guile interface library test
;
; Create an object using the NSString [stringWithCString:] method and
; check that the resulting object is of the correct class.
;
(greg-testcase "The `stringWithCString:' method creates an NSString object" #t
(lambda ()
  (define obj ([] "NSString" stringWithCString: "Hello world"))
  (gstep-bool ([] obj isKindOfClass: ([] "NSString" class)))
))

Though brief, this example is a complete test. It illustrates some of the main features of Greg test scripts:

Here is the same example in a slightly different form - using the greg-expect-pass macro -

;
; GNUstep-guile interface library test
;
; Create an object using the NSString [stringWithCString:] method and
; check that the resulting object is of the correct class.
;
(greg-expect-pass "The `stringWithCString:' method creates an NSString object"
  (define obj ([] "NSString" stringWithCString: "Hello world"))
  (gstep-bool ([] obj isKindOfClass: ([] "NSString" class)))
)

Why Greg does what it does

Greg was written to support regression testing for the GNUstep libraries. It was inspired by an earlier test framework (by Ovidiu Predescu) that used DejaGNU along with a `driver' program (to make the calls to the library) and a suite of TcL scripts to control the driver.

There were three main problems (inherent in the nature of DejaGNU) with that approach -

So something different was required: a test framework in a safer, simpler language that made it easy to create thin interfaces to libraries, thus simplifying the task of producing testcases.

Of course, the good points of DejaGNU needed to be retained - clear output, POSIX compliance, and the ability to test separate programs as well as libraries.

A couple of additional goals seemed worthwhile -

Hopefully, Greg meets all its goals.

A POSIX conforming test framework

This section is copied almost directly from the DejaGNU documentation with minor modifications -

Greg is believed to conform to the POSIX standard for test frameworks.

POSIX standard 1003.3 defines what a testing framework needs to provide, in order to permit the creation of POSIX conformance test suites. This standard is primarily oriented to running POSIX conformance tests, but its requirements also support testing of features not related to POSIX conformance.

The POSIX documentation refers to assertions. An assertion is a description of behavior. For example, if a standard says "The sun shall shine", a corresponding assertion might be "The sun is shining." A test based on this assertion would pass or fail depending on whether it is daytime or nighttime. It is important to note that the standard being tested is never 1003.3; the standard being tested is some other standard, for which the assertions were written.

As there is no test suite to test testing frameworks for POSIX 1003.3 conformance, verifying conformance to this standard is done by repeatedly reading the standard and experimenting. One of the main things 1003.3 does specify is the set of allowed output messages, and their definitions. Four messages are supported for a required feature of POSIX conforming systems, and a fifth for a conditional feature. Greg supports the use of all five output messages; in this sense a test suite that uses exactly these messages can be considered POSIX conforming. These definitions specify the output of a test case:

PASS
A test has succeeded. That is, it demonstrated that the assertion is true.
UPASS
A test was expected to fail, but passed. POSIX 1003.3 does not incorporate the notion of unexpected passes, so for strict POSIX conformance PASS, instead of UPASS, is returned for test cases which were not expected to pass but did. This means that PASS is in some sense more ambiguous than if UPASS is also used.
FAIL
A test has produced the bug it was intended to capture. That is, it has demonstrated that the assertion is false. The FAIL message is based on the test case only. Other messages are used to indicate a failure of the framework.
XFAIL
A test failed, but it was expected to fail. POSIX 1003.3 does not incorporate the notion of expected failures, so for strict POSIX conformance FAIL, instead of XFAIL, is returned for test cases which were expected to fail and did. This means that FAIL is in some sense more ambiguous than if XFAIL is also used.
UNRESOLVED
A test produced indeterminate results. Usually, this means the test executed in an unexpected fashion; this outcome requires that a human being go over the results to determine if the test should have passed or failed. This message is also used for any test that requires human intervention because it is beyond the abilities of the testing framework. Any unresolved test should be resolved to PASS or FAIL before a test run can be considered finished. Note that for POSIX, each assertion must produce a test result code. If a test isn't actually run, it must produce UNRESOLVED rather than just being left out of the output. With Greg this is not a problem - any unexpected termination of a greg-testcase procedure will produce UNRESOLVED. Here are some of the ways a test may wind up UNRESOLVED:
UNTESTED
A test was not run. This is a placeholder, used when there is no real test case yet.

The only remaining output message left is intended to test features that are specified by the applicable POSIX standard as conditional:

UNSUPPORTED
There is no support for the tested case. This may mean that a conditional feature of an operating system, or of a compiler, is not implemented.

Greg uses the same output procedures to produce these messages for all test suites, and these procedures are already known to conform to POSIX 1003.3. For a Greg test suite to conform to POSIX 1003.3, you must set the variable greg-posix to be true (or run the `greg' command with the --posix flag). Doing this will ensure that non-POSIX extensions are not used.
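For example, an embedded test run can request strict POSIX output before any testcases are run -

(set! greg-posix #t)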

Installing Greg

Requirements

Greg needs to have Guile installed. It should work with Guile-1.3 or later. You need to have the `guile' program in your path in order for the installation process to determine the proper locations for things.

You can get Guile from any GNU ftp site.

The current version of Greg can normally be found on GNU ftp sites, with documentation online at http://www.gnu.org/software/greg/gregdoc.html

or, for the bleeding edge, it is available by anonymous CVS from the GNUstep project (http://www.gnustep.org/) -

CVSROOT=":pserver:[email protected]:/gnustep"
export CVSROOT
cvs login (password is `anoncvs')
cvs -z3 checkout guile/Greg

Building

To build Greg -

Type ./configure in the main Greg directory to configure for your system.

Once configuration is complete, go into the `Library' subdirectory and type make install to build and install things.

You should end up with -

A dynamic library, which can be dynamically linked into Guile with -

(if (not (feature? 'greg-pty))
  (dynamic-call "scm_init_greg" (dynamic-link "libgreg.so")))

a module defining Guile procedures and variables providing the main test framework, which can be accessed using (use-modules (ice-9 greg)),

and a Guile script that you can use to run tests from the unix command-line (`greg').

You MUST install Greg before you attempt to use it (or run its self-tests) because the Guile modules making it up must be in place in the standard Guile directories before Greg can work.

Once Greg is installed, you can type make check in the Tests directory to get Greg to test itself.

You can type make in the Documentation directory to build the documentation in info, html and dvi formats.

NB.
You must have the `makeinfo' program installed to build the documentation in info format
You must have the `texi2html' program installed to build the documentation in html format
You must have the `texi2dvi' program installed to build the documentation in dvi format

Problems

Greg is quite simple, so there is not much to go wrong with it. Of course, you must have a working copy of Guile installed, and you need to make sure you ran the configure script to configure Greg for your system, and installed Greg, but after that, most stuff should just work.

The single area where you are most likely to encounter problems is if you are using Greg to test external programs run in a child process using the (greg-child) procedure.

Please attempt to make a patch to fix things on your operating system and send it to me - <[email protected]> or <[email protected]>.

Using `greg'

The Greg framework is designed to be used in two ways - as an embedded system from within any application which is linked with the Guile library, or stand-alone using the command-line `greg' driver script. For both of these methods of usage the test cases are written the same way and the expected output is the same.

Output

While Greg may produce more verbose output in response to various settings, the basic output of a test run is a series of lines describing the success/failure state of each testcase encountered, followed by a summary of all test cases.

In `normal' mode, only unexpected results are displayed, but in `verbose' output mode, the results of all testcases are displayed.

`greg' flags the outcome of each test case with a line consisting of one of the following codes followed by a colon, a space, and then the testcase description.

PASS
The most desirable outcome: the test succeeded, and was expected to succeed.
UPASS
A pleasant kind of failure: a test was expected to fail, but succeeded. This may indicate progress; inspect the test case to determine whether you should amend it to stop expecting failure.
FAIL
A test failed, although it was expected to succeed. This may indicate regress; inspect the test case and the failing software to locate the bug.
XFAIL
A test failed, but it was expected to fail. This result indicates no change in a known bug. If a test fails because the operating system where the test runs lacks some facility required by the test, the outcome is UNSUPPORTED instead.
UNRESOLVED
Output from a test requires manual inspection; the test suite could not automatically determine the outcome. For example, your tests can report this outcome when a test does not complete as expected.
UNTESTED
A test case is not yet complete, and in particular cannot yet produce a PASS or FAIL. You can also use this outcome in dummy "tests" that note explicitly the absence of a real test case for a particular property.
UNSUPPORTED
A test depends on a conditionally available feature that does not exist (in the configured testing environment). For example, you can use this outcome to report on a test case that does not work on a particular target because its operating system support does not include a required subroutine.

Files and directories

A Greg test run expects to find files and directories in a certain layout (modeled on that used by DejaGNU) - though it is possible to override this DejaGNU compatibility feature and simply run the tests in a list of files.

The test source directory (normally your current directory) is expected to contain one or more tool directories. Each tool directory should contain one or more test scripts. In fact any file in a tool directory which has a `.scm' extension is assumed to be a Guile test script.

When a normal Greg test run is done, Greg goes through each tool directory in turn and loads each test script in turn.

You may set the Guile variable greg-tools or use the --tool ... command-line option to specify a list of tool directories to use rather than assuming that all subdirectories are tool directories. If you do this, the tools are tested in the order in which they appear in the list rather than the default order (ASCII sorted by name).

You may set the Guile variable greg-files or use the --file ... command-line option to specify a list of file names to use rather than assuming that all `.scm' files in each tool directory are test scripts. If you do this, the files are loaded in the order in which they appear in the list. You may omit the `.scm' extension from filenames and Greg will supply it for you if necessary.

You may set the Guile variable greg-paths to specify a list of test files to be run directly, or simply list the files to be run on the command-line.

Doing this overrides the greg-tools and greg-files variables, and simply runs the files you list in the order you list them.

As a (minor) complication to this simple layout, Greg permits the use of `begin.grg' and `end.grg' scripts in both the main source directory and in each tool directory. These scripts permit you to add any initialisation and cleanup code you want. Typically (for non-embedded testing) you would use a `begin.grg' script to start the application to be tested.

If `begin.grg' exists in the main source directory, it will be loaded before any tools are tested.

If `end.grg' exists in the main source directory, it will be loaded after all the tools are tested.

If `begin.grg' exists in a tool directory, it will be loaded before any test scripts in that directory are loaded.

If `end.grg' exists in a tool directory, it will be loaded after all the test scripts in that directory are loaded.

NB. Even when you use the greg-paths variable to run one or more test files directly, the `begin.grg' and `end.grg' files in your current directory will be loaded.
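Putting this together, a typical layout (the tool and file names here are hypothetical) might look like -

testdir/            - the test source directory (your current directory)
  begin.grg         - optional initialisation, loaded before any tool
  end.grg           - optional cleanup, loaded after all tools
  mytool/           - a tool directory
    begin.grg       - optional per-tool initialisation
    basic.scm       - test scripts; any file with a `.scm' extension
    errors.scm
    end.grg         - optional per-tool cleanup
  othertool/
    simple.scm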

Embedded usage

Greg is designed primarily for embedded usage - any application that uses Guile as its scripting language should be able to use Greg to test itself.

To this end, Greg provides a Guile module containing definitions of various procedures (used to run tests) and variables (used to modify the behavior of a test run).

Before trying to use any part of Greg, you need to load the Greg module with (use-modules (ice-9 greg))

The main procedure to run Greg tests is (greg-test-run). You can use this to run tests in much the same way as the `greg' script is used to run tests from the command-line. The behavior of this procedure is modified by setting the following top-level variables -
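For example, a minimal embedded run might look like the sketch below (it assumes that greg-tools, described under `Files and directories', holds a list of tool directory names as strings) -

(use-modules (ice-9 greg))

; Restrict the run to a single (hypothetical) tool directory -
; otherwise every subdirectory containing `.scm' files is tested.
(set! greg-tools '("mytool"))

; Run the tests - or use (greg-test-all) to test everything.
(greg-test-run)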

Command-line usage

`greg' is the executable test driver for Greg. This is a Guile script that you can use to run tests from the command line. The command-line options that you can pass to `greg' are listed below.

`greg' returns an exit code of 1 if any test has an unexpected result; otherwise (if all tests pass or fail as expected) it returns 0 as the exit code.

This is the full set of command line options that `greg' recognizes.

greg --tool tool ...
[ --debug ]
[ --file script ... ]
[ --help ]
[ --objdir path ]
[ --outdir path ]
[ --posix ]
[ --srcdir path ]
[ -v | --verbose ]  [ -V | --version ]
[ files to run ]
--tool tool ...
tool specifies what set of tests to run. It provides a list of subdirectories (each corresponding to a tool) in which test scripts can be found. Initialisation code (including executable tool startup) for each tool may be placed in `begin.grg' in the appropriate tool subdirectory. Cleanup code may be placed in `end.grg'. For example, including --tool gcc on the `greg' command line runs tests from the gcc subdirectory. The order in which the tools are tested will be the same as the order in which the tool names occur on the command line.
--file script ...
Specify the names of testsuites to run. By default, `greg' runs all tests for the tool, but you can restrict it to particular testsuites by giving the names of the `.scm' Guile scripts that control them. You do not need to supply the `.scm' file extension - it will be assumed. The script names may not include path information; use plain filenames. The order in which the test scripts are run will be the same as the order in which the script names occur on the command line.
--debug
Turns on the expect internal debugging output. Debugging output is displayed as part of the `greg' output, and logged to a file called `tests.dbg'. The extra debugging output does not normally appear on standard output.
--help
Prints out a short summary of the `greg' options, then exits (even if you also specify other options).
--objdir path
Use path as the top directory containing any auxiliary compiled test code. This defaults to `.'. Use this option to locate pre-compiled test code. You can normally prepare any auxiliary files needed with make.
--outdir path
Write output logs in directory path. The default is `.', the directory in which you start `greg'. This option affects only the log and debug files `tool.log' and `tool.dbg'.
--srcdir path
Use path as the top directory for test scripts to run. `greg' looks in this directory for any subdirectory whose name matches any toolname (specified with --tool).
--verbose
-v
By default, `greg' shows only the output of tests that produce unexpected results; that is, tests with status FAIL (unexpected failure), UPASS (unexpected success), or ERROR (a severe error in the test case itself). Specify --verbose to see output for tests with status PASS (success, as expected) and XFAIL (failure, as expected). It also causes a more detailed summary to be displayed.
Specifying --verbose more than once causes more detail to be displayed.
--version
-V
Prints out the version numbers of Greg, and Guile, then exits without running any tests.
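For example, to run just the `basic' and `strings' scripts for a (hypothetical) tool called `mytool', with verbose output -

greg --tool mytool --file basic strings --verbose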

Writing tests

getting started

The simplest way to get started is to write a file (say `myTests') and type greg --verbose myTests to run the tests in it.

Your file might contain code like -

; A simple test that basic arithmetic works

(greg-expect-pass "one plus one is two"
  (eq? (+ 1 1) 2)
)

(greg-expect-pass "one minus one is zero"
  (eq? (- 1 1) 0)
)

And would produce output like -

PASS: one plus one is two
PASS: one minus one is zero

# of testcases attempted   2
# of expected passes       2
# of expected failures     0
# of unexpected passes     0
# of unexpected failures   0
# of unresolved testcases  0
# of unsupported testcases 0
# of untested testcases    0

types of testsuite

There are three types of situation where Greg may be used as a test framework -

testsuite file layout

A testsuite for a tool is a directory containing one or more test script files and (optionally) `begin.grg' and `end.grg' files to handle initialisation and cleanup.

Each script file has a `.scm' extension and contains Guile (Scheme) code, but you do not need to know much about the Guile programming language to write most tests.

A script file will contain one or more testcases - each of which constitutes a test of a single well defined feature of the tool that the script is meant to test. A testcase is always written using the greg-testcase procedure, though this procedure could be invoked by a convenience macro.

greg-expect-pass

The greg-expect-pass macro is a shorthand method of writing the most usual sort of testcase - where a fragment of Guile code is run and is expected to return a true result. It passes an assertion and a fragment of Guile code that performs a test to the greg-testcase procedure -

; A simple test that basic arithmetic works

(greg-expect-pass "one plus one is two" (eq? (+ 1 1) 2))

is equivalent to -

(greg-testcase "one plus one is two" #t
  (lambda ()
    (eq? (+ 1 1) 2)
  )
)

greg-expect-fail

The greg-expect-fail macro is a shorthand method of writing a testcase to confirm that a known bug is still present in the code being tested. Once the bug is fixed, it would be altered to be a greg-expect-pass testcase.

; A test that basic arithmetic DOESN'T work!

(greg-expect-fail "one plus one is two" (eq? (+ 1 1) 2))

is equivalent to -

(greg-testcase "one plus one is two" #f
  (lambda ()
    (eq? (+ 1 1) 2)
  )
)

greg-testcase

The greg-testcase procedure takes three arguments - a string describing the testcase (the assertion being tested), a boolean giving the expected result (#t if the testcase is expected to pass, #f if it is expected to fail), and a thunk (a procedure of no arguments) which is run to perform the test.

The Guile programming language permits the thunk to return in four ways -

As there are no other ways in which the thunk may be exited, it is impossible for a testcase to produce a result that doesn't fit into the framework (unless your testcase manages either to crash Guile or enter an infinite loop - in which case you won't get any output).

The value returned by the greg-testcase procedure is a boolean - #t if the test resulted in an expected pass, #f otherwise.
You can use this return value to make the execution of subsequent testcases dependent on the success of an earlier testcase.


;
; A testcase to check an instance of numeric addition
;
(greg-testcase "One plus One is two" #t
(lambda ()
  (eq? (+ 1 1 ) 2)
))

;
;  The above testcase will generate output -
;  `PASS: One plus One is two'
;

The system provides hooks for general purpose procedures that are automatically called immediately before and after each testcase is executed. These procedures can be used to perform additional logging or other housekeeping functions -

While a testcase is executing (or in the greg-case-begin or greg-case-end procedures) there are a number of public procedures that may be used to obtain information about the system -

Multiple testcases

It is normal to have more than one testcase in a file and this produces no problems - the only thing to watch out for is communicating information between testcases -

The scope of variables defined in the thunk of a greg-testcase procedure call is that thunk - such variables will not be visible to the next testcase.

So - to pass information from one testcase to the next it is necessary to define variables that can be seen in each testcase. The way to do this is normally to define these variables at the start of the file and then use the set! procedure within each testcase to set a value for a variable to be passed to the next testcase.


(define arith-ok #f)

;
; A testcase to check an instance of numeric addition
;
(greg-testcase "One plus One is two" #t
(lambda ()
  (if (eq? (+ 1 1 ) 2)
    (begin (set! arith-ok #t) #t)
    #f)
))

;
; A testcase to check arithmetic - only supported if we have addition.
;
(greg-testcase "X multiplied by 2 is X plus X" #t
(lambda ()
  (if arith-ok
    (eq? (+ 1 1) (* 1 2))
    (throw 'unsupported))
))

Of course, if (as above) the only information you want to pass from a testcase is whether the test succeeded or not, you can use the return value from the greg-testcase procedure directly -


(if
  (greg-testcase "One plus One is two" #t
    (lambda ()
      (eq? (+ 1 1 ) 2)
    )
  )
  (greg-testcase "X multiplied by 2 is X plus X" #t
    (lambda ()
      (eq? (+ 1 1) (* 1 2))
    )
  )
  (greg-dlog "Arithmetic operations not supported\n")
)

External tests

When Greg is used to test an external application, you usually want to run that application as a child process on a pseudo-terminal, with each test sending a sequence of commands to the application and reading the anticipated output back from it.

Starting a child process

Greg provides the greg-child procedure to start up a child process on a pseudo-terminal. You would usually call this procedure in the `begin.grg' file in your tool directory, but you could call it at the start of each script to get a new child process for each script.

The greg-child procedure expects one argument (the name of the program to be executed) followed by any number of additional arguments which are the arguments to be passed to the child process.
If the program name does not begin with a slash, Greg will look in the directory specified in greg-obj-dir to find it (by default the current directory).
If you want your normal PATH to be searched for the program, you should use -
(greg-child "/bin/sh" "-c" "foo args")
to get the shell to execute program `foo' with arguments args.

The greg-child procedure will automatically close down the I/O channels to any process previously started and wait for that process to die. If the old child process is badly behaved and will not die, this can result in Greg hanging - in this case you may need to explicitly kill the old child by another method before starting the new child process (this is one of the uses of the `end.grg' script).
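For instance, when testing an interactive shell (as in the complete example later in this manual), the tool directory might contain a `begin.grg' that starts the child and an `end.grg' that asks it to exit - a sketch, assuming the program terminates cleanly when asked -

; begin.grg - start the program under test on a pseudo-terminal
(greg-child "/bin/sh" "-i")
(greg-send "PS1='% '\n")

; end.grg - ask the child shell to exit so nothing is left running
(greg-send "exit\n")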

As a special case, you can use an empty string as the program name - if you do this, another copy of the guile process will be created as a child and the value returned by greg-child in the child process will be a list containing the single number 0 (in the parent it will be a list containing the input port, output port and process id of the child). You can use this information to get the child copy of the process to be the program under test. This is useful for embedded testing where you want to test the I/O capabilities of the program.
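The sketch below illustrates this special case; the `ready' handshake is purely hypothetical -

; Fork a second copy of the guile process as the child.
(define child-info (greg-child ""))

(if (eqv? (car child-info) 0)
  ; In the child copy: behave as the program under test.
  (begin
    (display "ready\n")
    (exit 0))
  ; In the parent: child-info is (input-port output-port pid),
  ; so the child's I/O can be tested in the usual way.
  (greg-testcase "forked child announces itself" #t
    (lambda ()
      (greg-recv ("ready" #t))
    )
  )
)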

NB. The greg-child procedure is implemented on top of the new primitive pty-child. This primitive is used to create a new child process on the end of a pseudo-terminal. Arguments and return values are as for greg-child.

Sending to a child process

The greg-send procedure is provided to send instructions to a child process. This procedure takes one or more string arguments and sends them to the child process (if one exists).

Reading from a child process

The greg-recv macro is used to read data from a child process. This procedure actually provides a simple front-end to the expect module. You can use the expect module facilities directly if you want more control than is offered by greg-recv.

The greg-recv macro expects one or more lists as arguments - each list containing a string (a pattern to match) and a result to return on a successful match. The value returned by greg-recv is the result for the pattern actually matched.
If no pattern is matched within the timeout period then an exception is raised, causing the testcase to return a FAIL result (unless you use (set! expect-timeout-proc xxx) to override Greg's timeout handler).
If no pattern is matched before an end-of-file is read, then an exception is raised, causing the testcase to return a FAIL result (unless you use (set! expect-eof-proc xxx) to override Greg's end-of-file handler).
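For example, a testcase for a hypothetical interactive program that answers a `version' command might look like -

(greg-testcase "the program reports its version" #t
  (lambda ()
    (greg-send "version\n")
    (greg-recv
      ("^[Vv]ersion [0-9]" #t)   ; the reply we expect - pass
      ("^unknown command" #f)    ; the wrong reply - fail
    )
  )
)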

NB. The expect-timeout-proc and expect-eof-proc procedures are saved when a tool is started, and restored when it ends ... so if you want to make changes to these procedures in multiple tools, you must do so in the begin.grg and end.grg files for each tool.

In addition to setting up the above expect procedures, Greg also sets the expect-timeout variable to a 15 second timeout, and sets the expect-char-proc to be greg-dlog so that data read from the child process is logged as debug output by default. You can of course override this behavior in begin.grg.

The pattern matching is done with extended regular expressions, usually with input split into lines so that a caret (^) at the start of an expression matches the start of a line, and a dollar ($) matches the end of a line.
This is convenient for testing programs that produce lines of output in an expected format, as you can easily match the start and end of an output line.

If you want to change this behavior to permit multi-line patterns and to have the caret and dollar match the start of input and end of input respectively, then you can use -

(set! expect-strings-compile-flags regexp/extended)
(set! expect-strings-exec-flags 0)

This pattern matching behavior is occasionally useful when you are testing a program that produces output without clearly recognisable individual lines. NB. Greg does not save and restore these values, so a change to them affects all tools being tested until you change them back.

A complete external test

;
; Run an interactive shell as a child process
;
(greg-child "/bin/sh" "-i")

;
; Set the shell prompt
;
(greg-send "PS1='% '\n")   

;
; Now test that the shell echoes what we expect.
; If we have a timeout or an eof, we will get a failure result.
;
(greg-testcase "echo 'hello'" #t
  (lambda ()
    (greg-send "echo hello\n")  ; Get it to send us something
    (expect-strings
      ("hello\r\n% " #t)
    )
  )
)

Index

  • --debug (`greg' option)
  • --help (`greg' option)
  • --objdir (`greg' option)
  • --outdir (`greg' option)
  • --srcdir (`greg' option)
  • --tool (`greg' option)
  • --verbose (`greg' option)
  • --version (`greg' option)
  • -v (`greg' option)
  • -V (`greg' option)
  • a

  • a complete external test
  • ambiguity, required for POSIX
  • auxiliary test programs
  • b

  • begin.grg
  • building Greg
  • c

  • check makefile target
  • cleanup
  • command line options
  • d

  • debug log for test cases
  • debug variable
  • design goals
  • directories and files
  • e

  • embedded usage
  • end.grg
  • example
  • existing tests, running
  • exit code from `greg'
  • expected failure
  • external tests
  • f

  • FAIL
  • failing test, expected
  • failing test, unexpected
  • failure, POSIX definition
  • files and directories
  • files variable
  • g

  • getting started writing a test
  • `greg' description
  • `greg' exit code
  • `greg' option list
  • Greg test driver
  • `greg', listing options
  • greg-case-begin
  • greg-case-end
  • greg-casename
  • greg-child
  • greg-debug
  • greg-directory
  • greg-expect-fail
  • greg-expect-pass
  • greg-filename
  • greg-files
  • greg-obj-dir
  • greg-out-dir
  • greg-paths
  • greg-posix
  • greg-recv
  • greg-send
  • greg-src-dir
  • greg-testcase
  • greg-toolname
  • greg-tools
  • greg-verbose
  • h

  • help with `greg'
  • i

  • initialisation
  • installing
  • invoking
  • l

  • log files, where to write
  • m

  • make check
  • module usage
  • multiple testcases
  • n

  • naming tests to run
  • News
  • o

  • obj-dir variable
  • object directory
  • option list, `greg'
  • options
  • out-dir variable
  • output directory
  • output, additional
  • overview
  • p

  • PASS
  • paths variable
  • POSIX conformance
  • posix variable
  • problems
  • r

  • reading from a child process
  • requirements
  • run test procedure
  • running
  • running tests
  • s

  • selecting a range of tests
  • selecting tests for a tool
  • sending to a child process
  • source directory
  • src-dir variable
  • standard conformance: POSIX 1003.3
  • starting a child process
  • success, POSIX definition
  • successful test
  • successful test, unexpected
  • t

  • test cases, debug log
  • test programs, auxiliary
  • test, failing
  • test, successful
  • test, unresolved outcome
  • test, unsupported
  • testcase
  • tests, running
  • tests, running specifically
  • `tests.dbg' file
  • tools variable
  • troubleshooting
  • turning on output
  • u

  • unexpected success
  • UNRESOLVED
  • UNSUPPORTED
  • unsupported test
  • UNTESTED
  • untested properties
  • UPASS
  • UPASS, avoiding for POSIX
  • v

  • variables
  • verbose variable
  • version numbers
  • w

  • writing tests
  • x

  • XFAIL
  • XFAIL, avoiding for POSIX