Generators for the Asymmetric TSP, Version 1.06

These are C programs for generators that will create the
random ATSP instances used in the paper

      The Asymmetric Traveling Salesman Problem:
       Algorithms, Instance Generators, and Tests

      J. Cirasella, D.S. Johnson, L.A. McGeoch, and W. Zhang

from the Proceedings of the 3rd Workshop on Algorithm
Engineering and Experimentation (ALENEX 01), Springer Lecture
Notes in Computer Science, to appear.

A PostScript version of this paper can be downloaded from
http://www.research.att.com/~dsj/.

The generators have been modified by Alexey Zverovich to include
a flag (-tsplib) that will cause them to produce instances in TSPLIB
format.

Questions, comments, and bug reports should be addressed to
David Johnson <dsj@research.att.com>.

Included are the following files:

README		This file

amatgen.c	Instance generators (11 generators for 12 classes)
coingen.c	Generator names correspond to class names in the
cranegen.c		paper, except that rtiltgen was used to
diskgen.c		generate both the rtilt and rect instances.
rtiltgen.c
shopgen.c
smatgen.c
stiltgen.c
supergen.c
tmatgen.c
tsmatgen.c

match.c		Implementations of an assignment problem algorithm
timedmatch.c		(the Hungarian method) for use in machine benchmarking.
			The second contains timing code that reports the
			breakdown of total time into instance-reading and
			matching times.

symm.c		Program for converting an asymmetric instance with N cities
			into an equivalent 3N-city symmetric instance
			in TSPLIB format.

genrand.c	Portable random number generator files used for generators
genrand.h

amat100.2	Sample instances for each class
coin100.2
crane100.2
disk100.2
rect100.2
rtilt100.2
shop100.2
smat100.2
stilt100.2
super100.2
tmat100.2
tsmat100.2

Makefile	"make all" compiles all the generators and the
			(untimed) matching code	
testall		Shell script that tests that the generators generate
			the sample instances correctly

commands	Commands for generating the entire random testbed
apbounds	Assignment Problem lower bounds for all testbed instances
hkbounds	Held-Karp Bounds for all instances in the testbed
optvals		Currently known optimum tour lengths for instances in
		the testbed

WARNING: The total size of the testbed is over a gigabyte of storage.
For each class there are 24 instances: 10 with 100 cities,
10 with 316, 3 with 1000, and 1 with 3162.  Note that numbers of cities
go up by factors of roughly sqrt(10), while the memory requirements go up
by factors of 10.  Each 3162-city instance requires from 50 to 70
megabytes, so you may not want to generate all of these at once,
depending on your available disk space (or you may choose not to
test them at all).

If the -tsplib flag is NOT used, the format for the instance files
is as follows:

(1) a first line consisting of the number N of cities followed by the
    letter "A"

(2) N^2 lines, each consisting of one entry of the distance matrix in
    row major order

(3) A final line that gives the command by which the instance was
    generated.
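
The format above can be parsed with a short loop; as an illustration,
here is a hypothetical reader (the function name, in-memory layout, and
minimal error handling are our own choices, not part of the package):

```c
#include <stdio.h>
#include <stdlib.h>

/* Read an instance in the non-TSPLIB format described above:
   a header line "N A", then N^2 distances in row-major order,
   then the generating command, which is simply ignored here.
   Returns a malloc'd N*N matrix with dist(i,j) at d[i*n + j],
   or NULL on any error. */
int *read_atsp(const char *fname, int *n_out)
{
    FILE *fp = fopen(fname, "r");
    int n, i, *d;
    char tag;

    if (fp == NULL)
        return NULL;
    if (fscanf(fp, "%d %c", &n, &tag) != 2 || tag != 'A' || n <= 0) {
        fclose(fp);
        return NULL;
    }
    d = malloc((size_t)n * n * sizeof *d);
    for (i = 0; i < n * n; i++)
        if (fscanf(fp, "%d", &d[i]) != 1) {
            free(d);
            fclose(fp);
            return NULL;
        }
    fclose(fp);
    *n_out = n;
    return d;
}
```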

The generators make use of a shift register random number generator
from Knuth Volume 2, whose details are in the file genrand.c.  This
random number generator is a bit more complicated than one might expect,
since it needs to meet the twin goals of (1) being portable (we hope) and
(2) enabling the instance generators to produce the instances used
in the paper, which were originally generated using a non-portable
approach.  (Thanks to David Applegate for doing the reverse engineering
and constructing a random number generator that meets these requirements.)

Note that the two generators that produce random instances closed under
shortest paths (tmatgen and tsmatgen) are quite slow, since for
simplicity they were implemented using repeated squaring rather than
an efficient shortest path algorithm.
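
The repeated-squaring idea can be sketched as follows (this is an
illustrative reimplementation, not the code in tmatgen.c or tsmatgen.c):
replace D with min(D, D*D) under (min,+) until it stops changing, which
reaches the shortest-path closure in O(log N) passes of O(N^3) work each.

```c
#include <stdlib.h>
#include <string.h>

/* Close an n x n distance matrix (row-major, d[i*n + j], zero
   diagonal) under shortest paths by repeated (min,+) squaring. */
void close_under_shortest_paths(int *d, int n)
{
    int *t = malloc((size_t)n * n * sizeof *t);
    int changed = 1;

    while (changed) {
        changed = 0;
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++) {
                int best = d[i*n + j];
                for (int k = 0; k < n; k++) {
                    long s = (long)d[i*n + k] + d[k*n + j];
                    if (s < best)
                        best = (int)s;
                }
                t[i*n + j] = best;
                if (best != d[i*n + j])
                    changed = 1;
            }
        memcpy(d, t, (size_t)n * n * sizeof *d);
    }
    free(t);
}
```

Each pass costs O(N^3), and even the convergence check costs a full
extra pass, which is why this is much slower than running an efficient
single-source shortest path algorithm from each city.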

--------------------

MATCHING CODE AND MACHINE BENCHMARKING

The Hungarian method implementation can be used to roughly estimate
the relative speeds of machines.  We have two versions of
the code.  The first (timedmatch) reports both the time to read the
instance and the total time, as measured in clock ticks.  (Depending
on your machine, there are typically 60 or 100 clock ticks per second.)
The second version (match) omits the detailed timing results, and is
appropriate for systems (Windows, etc.) that do not have the same
timing primitives.

A reasonable battery of timing tests would be:

time match rtilt100.0
time match rtilt316.10
time match rtilt1000.20
time match rtilt3162.30

For our 500 MHz ES40 6/500 Compaq Alpha machine (60 ticks per second)
using gcc -O to compile, the timing test results and the reported
user times are as follows (running match rather than timedmatch will
yield the same overall user times without the breakdowns):

$ time timedmatch rtilt100.0
Cost= 7327438  Time=  2.00 ticks
Readtime=  1.00 ticks ( 50.0% )  Matchtime =  1.00 ticks ( 50.0% )
user    0m0.03s

$ time timedmatch /usr/dsj/atspi/instances/rtilt/rtilt316.10
Cost= 13659578  Time= 29.00 ticks
Readtime= 17.00 ticks ( 58.6% )  Matchtime = 12.00 ticks ( 41.4% )
user    0m0.50s

$ time timedmatch /usr/dsj/atspi/instances/rtilt/rtilt1000.20
Cost= 24146790  Time= 488.00 ticks
Readtime= 174.00 ticks ( 35.7% )  Matchtime = 314.00 ticks ( 64.3% )
user    0m8.13s

$ time timedmatch rtilt3162.30 
Cost= 42296468  Time= 15089.00 ticks
Readtime= 1739.00 ticks ( 11.5% )  Matchtime = 13350.00 ticks ( 88.5% )
user    4m11.48s

Note that readtime growth is quadratic, while matchtime growth is
close to cubic, something that might be obscured if one looks only
at total time.

--------------------

USING STSP CODE

If you want to try your symmetric TSP code on these instances, you can
convert them to TSPLIB STSP format using the program symm.c.  Usage:

symm ATSP-filename upperbound > STSP-filename

where "upperbound" is an upper bound on the optimal tour length.
The new instance has three times as many vertices, and incorporates
the standard transformation where each city is replaced by a chain
of three cities.
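
As an illustration of that transformation, here is a hypothetical
sketch (the index layout, function name, and the way symm.c derives its
"infinite" weight from the upper bound may all differ from the actual
program): city i becomes the chain 3i (in), 3i+1 (mid), 3i+2 (out)
joined by zero-weight edges, and each arc (i,j) becomes the undirected
edge (out_i, in_j) with the same weight.

```c
#include <stdlib.h>

/* Convert an n-city ATSP matrix d (row-major, d[i*n + j]) into a
   3n-city symmetric matrix.  Pairs not created by the transformation
   get the large weight "inf" so they cannot appear in an optimal
   tour.  Returns a malloc'd 3n x 3n matrix. */
int *atsp_to_stsp(const int *d, int n, int inf)
{
    int m = 3 * n;
    int *s = malloc((size_t)m * m * sizeof *s);

    for (int a = 0; a < m; a++)
        for (int b = 0; b < m; b++)
            s[a*m + b] = (a == b) ? 0 : inf;
    for (int i = 0; i < n; i++) {
        int in = 3*i, mid = 3*i + 1, out = 3*i + 2;
        s[in*m + mid] = s[mid*m + in] = 0;   /* zero-weight chain */
        s[mid*m + out] = s[out*m + mid] = 0;
        for (int j = 0; j < n; j++)
            if (j != i)                      /* arc (i,j) -> edge */
                s[out*m + 3*j] = s[3*j*m + out] = d[i*n + j];
    }
    return s;
}
```

An optimal tour in the resulting instance traverses each three-city
chain consecutively and has the same length as the optimal ATSP tour.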

