Xamin API Guide


The Xamin catalog and archive interface allows users to query HEASARC catalogs, plot results, and download associated data products. This document describes how to use the web-based application programming interface (API) and the corresponding command line interface (CLI). For readability, arguments are presented as you would specify them on the command line, e.g.,

    runquery key1=value1 key2=value2
but identical arguments can be specified in the API HTTP request as

    ?key1=value1&key2=value2

appended to the service URL.

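For illustration, here is a minimal Python sketch of this mapping, assuming the default CLI URL described under Installation below; the exact endpoint path for a given installation may differ.

```python
# Sketch: build the HTTP equivalent of `runquery key1=value1 key2=value2`.
# The base URL is the default given in the Installation section; your
# installation's endpoint may differ.
from urllib.parse import urlencode

def build_query_url(base, **settings):
    """Return base?key1=value1&key2=value2 with proper URL escaping."""
    return base + "?" + urlencode(settings)

url = build_query_url("http://heasarc.gsfc.nasa.gov/xamin/cli",
                      table="messier", position="3c273")
```

Using urlencode also takes care of escaping spaces and commas in setting values.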
Installation (CLI only)

The Xamin command line interface is distributed as a very small executable Java Jar file, users.jar. Users will need a version of the Java runtime environment (JRE; version 1.6 or higher is recommended) to run this file. If a JRE is installed, then on Windows machines (and perhaps other environments) the file can be executed directly. In Unix the Jar is run using the command:
   java -jar jarname.jar command-line-arguments
Many shells have facilities available to assign a more natural name to the command. E.g., many Linux shells have an alias command like
   alias runquery='java -jar jarname.jar'
After setting the alias the user can simply execute
   runquery command-line-arguments
The exact syntax may differ on your system.

To use Xamin, you will need to download the users.jar file. This file is available in Xamin's distribution directory. E.g., if the Web interface is at

you can download the user jar file from

The HEASARC provides a version of users.jar. If you wish to access an Xamin service somewhere other than the standard address, then the XAMIN_CLI_URL environment variable should be set to the URL that receives Xamin CLI requests. By default this is http://heasarc.gsfc.nasa.gov/xamin/cli. If a test version of Xamin were installed at http://test.mysite.edu:8080/mydb, then the URL would be http://test.mysite.edu:8080/mydb/cli.


Settings

The command line interface to Xamin uses the concept of settings. Settings are just a mapping from case-insensitive string keys to string values. In some cases a setting may have multiple values. A user can specify settings on the command line with
     runquery key1=value1 key2=value2
The syntax
    runquery key1=value1a key1=value1b key2=value2
is equivalent to
    runquery key1=value1a,value1b key2=value2
I.e., you can specify multiple values for a setting either by repeating the key or by using commas in the settings string. If values include spaces or other characters that may be interpreted by the command shell, then the user should escape them using whatever technique is appropriate, e.g., enclosing the value within quotes. If a user wishes to include a comma within a settings value, it can be escaped using a backslash. Note that some care may be needed to ensure that the backslash is sent to the command and not swallowed or interpreted by the shell. Similarly, sometimes you need to ensure that quotes are actually seen by the Xamin CLI processor. For some shells a syntax like runquery key=\'quotedvalue\' is then appropriate, and we use that syntax in examples below.
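The multi-value and comma-escaping rules above can be sketched as follows (a hypothetical Python helper, not part of the Xamin distribution):

```python
import re

def parse_settings(args):
    """Collect key=value arguments into a case-insensitive, multi-valued map.
    A key may repeat, unescaped commas separate values, and '\\,' is a literal comma."""
    settings = {}
    for arg in args:
        key, _, raw = arg.partition("=")
        # split only on commas that are not preceded by a backslash
        values = [v.replace("\\,", ",") for v in re.split(r"(?<!\\),", raw)]
        settings.setdefault(key.lower(), []).extend(values)
    return settings

parsed = parse_settings(["key1=value1a", "KEY1=value1b,value1c", "key2=a\\,b"])
```

Here key1 ends up with three values, and key2 with the single value "a,b".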

Simple queries

To run a query try
    runquery table=messier
This will do a query of the entire Messier catalog using the default format and output fields. The results might look like
name |alt_name  |ra     |dec   |constell|dimension|vmag|vmag_uncert|class
M 7  |NGC 6475  |17 53.9|-34 49|SCO     |    80   | 3.3|null       | 3600
M 69 |NGC 6637  |18 31.4|-32 21|SGR     |     7   | 7.7|null       | 3080
M 70 |NGC 6681  |18 43.2|-32 18|SGR     |     8   | 8.1|null       | 3080
... 103 lines deleted ...
M 52 |NGC 7654  |23 24.2| 61 35|CAS     |    13   | 6.9|null       | 3600
M 81 |NGC 3031  |09 55.6| 69 04|UMA     | 26X14   | 6.8|null       | 6200
M 82 |NGC 3034  |09 55.8| 69 41|UMA     |  11X5   | 8.4|null       | 6600

Number of rows:    109
Number of columns: 9
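Output like the above can be parsed mechanically. A minimal Python sketch for this default pipe-delimited format (the parse_aligned helper is illustrative, not part of the distribution):

```python
def parse_aligned(text):
    """Parse Xamin's default 'aligned' output (pipe-delimited, padded) into dict rows.
    Lines without a '|' (row counts, '... lines deleted ...') are skipped."""
    lines = [ln for ln in text.splitlines() if "|" in ln]
    header = [h.strip() for h in lines[0].split("|")]
    return [dict(zip(header, (c.strip() for c in ln.split("|")))) for ln in lines[1:]]

sample = ("name |alt_name  |ra     |dec   |constell\n"
          "M 7  |NGC 6475  |17 53.9|-34 49|SCO")
rows = parse_aligned(sample)
```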

A very common query is to ask whether a table has information about a given position. You can make that query as

    runquery table=rosmaster position=3c273
    runquery table=rosmaster position=187.25,2.05
    runquery table=rosmaster position='12 29 06.70,02 03 08.6'
In the last example the quotes are included for the local shell and are not seen by the command. Conventionally a comma can be used to separate the ra and dec (or lat and lon) fields of the coordinates, though it is not always required.
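The sexagesimal form can be converted to the decimal form with a little arithmetic (1 hour of RA = 15 degrees). A Python sketch of the conversion (the helper is illustrative only):

```python
def sexagesimal_to_degrees(pos):
    """Convert 'hh mm ss.s,dd mm ss.s' (RA in hours, Dec in degrees) to decimal degrees."""
    ra_s, dec_s = pos.split(",")
    h, m, s = (float(t) for t in ra_s.split())
    ra = 15.0 * (h + m / 60 + s / 3600)   # 1 hour of RA = 15 degrees
    d, dm, ds = (float(t) for t in dec_s.split())
    sign = -1.0 if dec_s.strip().startswith("-") else 1.0
    dec = sign * (abs(d) + dm / 60 + ds / 3600)
    return ra, dec

ra, dec = sexagesimal_to_degrees("12 29 06.70,02 03 08.6")
# ra is approximately 187.278, dec approximately 2.052,
# consistent with the decimal form in the example above
```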

You can ask for more than one position in a query.

    runquery table=rosmaster position='3c273;3c279;abell 1656'
Since positions may include commas internally, a semicolon is used as the separator.

We may wish to put constraints on the rows we get out. E.g., if we want to skip short ROSAT exposures we might try:

    runquery table=rosmaster position=3c273 radius=10 constraint=exposure>10000

Complex Queries

While simple queries involve only one table, many queries combine information from multiple tables. Queries may involve correlations (joins in database parlance) where each row in the output includes columns from rows in different tables that have been matched in some way. E.g., we might perform a positional correlation between two tables looking for those targets that have been observed by two different observatories. Each row of the result comes from the same set of tables. To specify a correlation you just specify multiple tables in the query and indicate how the tables are to be joined. E.g.,
    runquery table=rosmaster,ascamaster offset=a:b:2
This query looks for ROSAT observations where there is a nearby ASCA observation (the ASCA center is within 2' of the ROSAT center). The 'a' and 'b' aliases refer to the tables in the query in the order they were specified (but see below for uploads).
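The offset constraint is just a bound on the angular separation of the two positions. A Python sketch of that separation, here computed with the haversine formula (Xamin's internal method may differ; a later section notes it uses precomputed unit-vector columns):

```python
from math import radians, degrees, sin, cos, asin, sqrt

def offset_arcmin(ra1, dec1, ra2, dec2):
    """Great-circle separation in arcminutes between two positions given in degrees."""
    ra1, dec1, ra2, dec2 = map(radians, (ra1, dec1, ra2, dec2))
    a = sin((dec2 - dec1) / 2) ** 2 + cos(dec1) * cos(dec2) * sin((ra2 - ra1) / 2) ** 2
    return degrees(2 * asin(sqrt(a))) * 60

# Two positions one degree apart in declination are 60 arcminutes apart,
# so offset=a:b:2 would reject such a pair.
```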

We can add a constraint to a correlation too.

   runquery table=rosmaster,ascamaster constraint='a.exposure>b.sis_exposure' offset=a:b:30

Here we ask for a looser correlation (30' rather than 2') but require that the ROSAT observation be longer than the ASCA SIS observation. By default correlations only return data from the first table in the correlation. If we want to show fields from both tables, we might have:

   runquery table=rosmaster,ascamaster constraint='a.exposure>b.sis_exposure' \
       offset=a:b:30 fields=a.exposure,b.sis_exposure

The FIELDS setting lets us request specified fields (or expressions) for output.

There are other ways to build queries that involve more than one table.

In a union each row is from a single table, but different rows may come from different tables. Basically we run the same query in different tables in succession. E.g., we might want to know all targets that have been observed by any of a set of telescopes. Unions require that the fields being used in the union be identical among the tables in the union. To find all of the ASCA and Chandra observations made of a given Messier object we could try:

          runquery table=messier,ascamaster uniontable=ascamaster union=chanmaster \
            fields=a.name,b.ra,b.dec offset=a:b:30
This query first matches the Messier objects against the ascamaster table, and then against chanmaster. Note that if any fields from the table that is replaced in the union are used as constraints or display fields, they must have the same name and type in all of the tables used in the union. E.g., we cannot display the target name in this union because it is called target_name in ascamaster and just target in chanmaster.

Anticorrelations are a special case of correlations. Here rather than finding pairs of tables that match we look for rows in the primary table (or tables) that do not match any row in the anticorrelate. Since there is no matching row, we cannot include any information from the anticorrelating table in the output fields. To find the Messier objects that have not been observed by XMM we might try:

         runquery table=messier,xmmmaster offset=a:b:30 anticorr=xmmmaster
When doing an anticorrelation no fields from the anticorrelating table may be used in the FIELDS setting -- how could one display information from a table in which there is no matching row? -- but fields from that table may be used in constraints. E.g., we might want to find Messier objects that have not been seen in long XMM observations -- we don't care if they have been seen in short ones. To find those we could add constraint=b.duration>20000 to the previous query. It may be counterintuitive, but adding this constraint actually increases the number of rows returned, since we are now anticorrelating with only a portion of the xmmmaster catalog.
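The logic of an anticorrelation can be sketched in a few lines of Python (illustrative only; the match function stands in for whatever offset or constraint joins the tables):

```python
def anticorrelate(primary, other, match):
    """Rows of `primary` with no matching row in `other` (the anticorrelation A^B)."""
    return [a for a in primary if not any(match(a, b) for b in other)]

# Constraining `other` first shrinks it, so more primary rows survive --
# the counterintuitive behavior noted above.
sources = ["M 7", "M 52", "M 81", "M 82"]
observed = ["M 81", "M 82"]
unobserved = anticorrelate(sources, observed, lambda a, b: a == b)
# -> ["M 7", "M 52"]
```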

Another special class of correlations are autocorrelations, where we match a table against itself. E.g., if we have a table of the many observations of many sources, then we might ask which sources have observations which show large changes in the flux. We might constrain the result to find pairs of rows where the target id is the same, but the magnitudes differs by more than some large value. Here's another simple example. Let's find pairs of nearby Messier objects:

          runquery table=messier,messier offset=a:b:30 \
              fields=a.name,b.name showoffsets constraint='a.name < b.name'
The offset looks for pairs that are within 30', while the constraint ensures that we don't return a row matched with itself. Note that by using a.name < b.name rather than a.name != b.name in the constraint we also ensure that we don't get two matches for each pair with the order of the match just reversed. We add in SHOWOFFSETS so that we can see how far apart the pairs are.

Xamin supports all of these kinds of queries and allows various combinations. Very complex queries may need to be built up using intermediate tables to store results. The table below suggests the keywords that are needed to build complex queries. In the Query column we give a symbolic representation of the query for nominal tables A,B,C,D. Here '|' indicates a union, 'x' indicates a correlation and '^' indicates an anticorrelation. In the options where correlations are involved, we've generally included example offset keywords, but it is possible to do correlations between tables in a number of different ways so these should not be considered definitive.

Description                                           Query               Options
Single table                                          A                   table=A
Correlation                                           AxB                 table=A,B offset=A:B:size (or other constraint)
Anticorrelation                                       A^B                 table=A,B offset=A:B:size anticorr=B
Autocorrelation                                       AxA                 table=A,A constraint=...
Anticorrelation of correlated tables                  (AxB)^C             table=A,B,C anticorr=C offset=A:B:size1,A:C:size2
Simple union                                          A|B|C               table=A union=B,C
Correlation with union                                (A|B|C)xDxE         table=A,D,E offset=A:D:size1,A:E:size2 uniontable=A union=B,C
Anticorrelation against union                         A^(B|C|D)           table=A,B uniontable=B union=C,D anticorr=B offset=A:B:size
Union of anticorrelations                             (A^B)|(A^C)|(A^D)   table=A,B uniontable=B union=C,D anticorr=B offset=A:B:size showanti
Anticorrelation of union                              (A|B|C)^D           table=A,D offset=A:D:size union=B,C uniontable=B anticorr=D
Correlation with a union followed by anticorrelation  (Ax(B|C))^D         Run as two queries: E = Ax(B|C), then E^D

Tables and Table Uploads

Users can query one or more tables in a given query. If multiple tables are given, then the query is of the join between those tables. Almost always that involves some kind of correlation between the tables, often, though not always, a spatial join. There are two settings that define tables in a query: TABLE and UPLOAD. The TABLE setting is used to reference a table that is already in the database. UPLOAD is used to specify that a file in some supported format is to be uploaded temporarily into the database. Unless a SAVEAS setting is specified, the uploaded table will be deleted immediately after the query finishes.

Currently uploads can be simple source lists (with one position per row), CSV files, VOTables, or the HEASARC's own TDAT format. The upload code looks at the first few characters of the file to determine the type, or the type can be indicated by the file extension.

E.g., suppose the file mysources.txt has the content:

    abell 1656
    10 10 10, -20 24
    12.98, 14.87
We can run a query
    runquery table=rosmaster upload=mysources.txt offset=a:b:30
which will look in the rosmaster table for all observations within 30 arcminutes of the three positions specified in the mysources.txt file. Note the OFFSET setting, which indicates that the offset between the 'a' table and the 'b' table should be less than 30 arcminutes. The 'a'-'z' prefixes or aliases are assigned in order, first to any tables specified in the query and then to any uploads. E.g., suppose you have a VOTable of a set of interesting Spitzer observations in spitzer.xml, and you also have a list of sources of interest in mysources.txt. You can ask if any of your sources have been seen by all of ROSAT, Chandra and Spitzer using
   runquery table=rosmaster,chanmaster \
       upload=spitzer.xml,mysources.txt \
       offset=a:b:30,a:c:30,a:d:30

Here table a is rosmaster, b is chanmaster, c is the table uploaded in spitzer.xml and d is your source list. You can think of this command as first finding all of the ROSAT observations that are within 30' of your sources and then seeing whether there are Chandra and Spitzer observations near the ROSAT observation. If the OFFSET had been specified as

       offset=a:d:30,b:d:30,c:d:30

the results might be slightly different. Here the positional matching is all done to the source list rather than the ROSAT observations.
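The alias assignment rule (tables first, then uploads) can be sketched as (hypothetical helper):

```python
from string import ascii_lowercase

def assign_aliases(tables, uploads):
    """Aliases 'a', 'b', ... are assigned to TABLE entries first, then to UPLOAD files."""
    return dict(zip(ascii_lowercase, list(tables) + list(uploads)))

aliases = assign_aliases(["rosmaster", "chanmaster"], ["spitzer.xml", "mysources.txt"])
# aliases["c"] is the spitzer.xml upload and aliases["d"] the source list
```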

Often you will want to upload a table and save the results for future queries. That is easy if you have an Xamin account.

User accounts

The command line interface supports Xamin user accounts. If the USER and PASSWORD settings are specified then user tables can be queried or created. The initial setup of a user account is best done using the Web interface since account validation usually requires responding to a mail message.

If you have an Xamin user account, then you can access the tables from that account exactly like system tables except that you prefix their names with the account name.

   runquery user=xxx password=yyy table=chanmaster,xxx.savedquery offset=a:b:10
runs a cross-correlation between the chanmaster table and a table that the user had saved earlier. The MYTABLES setting can be used to view user tables and the DELETE setting can be used to delete a user table. E.g.,
   runquery user=xxx password=yyy mytables
will return a list of your tables.
   runquery user=xxx password=yyy delete=atable
will delete the table xxx.atable.

Saving uploads and query results

By default a table (or source list) is uploaded as a temporary table and is destroyed as soon as the current query finishes. If you wish to save an uploaded table then specify the saveas setting as part of the upload. If you do not wish to view the table as part of the upload you can also set format=none to suppress the display of the uploaded table. E.g.,
    runquery upload=mytable.xml
will display the contents of mytable.xml but not save it. If this table has a mag column you might wish to filter it
   runquery  upload=mytable.xml constraint='mag>10'
to see only the fainter rows.

If you want to save the table you are uploading for future use then

   runquery user=xxx password=yyy upload=mytable.xml saveAs=bigstars system
will display the upload and save it. The SYSTEM setting ensures that certain system columns that are likely to be useful in future queries are included. If you already know what the table looks like you can suppress the output with
   runquery user=xxx password=yyy upload=mytable.xml saveAs=bigstars system format=none

Note that the SAVEAS setting saves whatever the query results would be: it saves the filtered rows and only the fields you specify.

It is an error to attempt to overwrite a table using SAVEAS. Use the DELETE setting first.

Any query can be saved using SAVEAS. Sometimes when doing complex queries it can be faster to break the query into pieces and save the intermediate results. If you have some filtered version of a table -- or perhaps some correlation result -- you can save that as your own table.

To save a personal version of only the northern Messier objects we might try

    runquery user=xxx password=yyy table=messier constraint='dec>0' saveas=northmessier system
which will create the table xxx.northmessier.

You can specify an existing table with SAVEAS. Xamin will attempt to append the results of the current query to the existing table; the fields being retrieved must be the same. E.g., you might have a very large table that times out when you try to upload it. Break it up into pieces and upload it in sections.

Query constraints

A typical query involves constraints on the table or tables being queried: only rows that meet specific criteria should be returned. This section discusses how constraints may be placed on queries. There is special support for positional and temporal queries since these are so common. Generic constraints can be specified on any column in the table. Users can also build complex queries using SQL aggregate functions.
Positional Constraints
Positional constraints can be specified using the POSITION and RADIUS settings or for multiple table queries using the OFFSET setting. The user can specify one or more positions in the sky with POSITION: only rows where the primary position is within the specified RADIUS of any of the positions are returned.

If a single position was specified, then the SHOWOFFSETS setting can be used to include an offset column between the specified position and the position in the current table. This offset column is not available when multiple positions are specified: it would be unclear which of the input targets the offset would refer to. If you need offsets from multiple positions, upload a source list and do a cross-correlation with the resulting table.

If multiple tables have been specified in the input, then the POSITION constraint applies to the first table that has position fields. If no RADIUS setting is given, then a table-dependent default is used. For observation tables the default is a typical size of the observation. For object catalogs the default may be the uncertainty in the position.

The radius constraint can be specified as an expression and not just a constant. E.g., the table might have a column, pos_err, giving the positional error of each row. One could then specify

    position=3c273 radius=2*pos_err
to ask for any objects within twice the position error of 3c273.

If an expression is used for the RADIUS setting it can sometimes be desirable to add a second, constant RADIUS value that provides some maximum. This can significantly improve query performance. E.g., if the maximum position error in our table is 30", then we might specify

   position=3c273 radius=2*pos_err,1
The second value for the radius indicates that we must be within a 1' (i.e., 60") of the source. Multiple radius settings are often useful when querying the position master tables.

Positional cross-correlations

When multiple tables are being queried, users normally wish to do a positional cross-correlation between the tables. The OFFSET keyword is used to specify this constraint. Each OFFSET specifies a constraint between two tables. The offset is specified as three colon-separated tokens

    offset=alias1:alias2:maxoffset

where the first two fields give the aliases for the two tables on which the constraint is being applied, and the third gives the maximum separation (in arcminutes) allowed between the two tables. The third field can be omitted, in which case the table-dependent default search radius of the first table is used. If more than two tables are being used in the query then multiple offset fields may be used. E.g.,
    runquery table=rosmaster,chanmaster,ascamaster \
        offset=a:b:10,a:c:30

requests data where there are coincident ROSAT, Chandra and ASCA observations: the center of the Chandra observation is within 10' of the ROSAT observation and the ASCA observation is within 30' of the ROSAT observation. An offset can pair any two tables in the query (so long as the tables have positional information).

When you make an N-table query, the program may give you a warning if it does not see at least N-1 offset values. This can mean that you forgot to constrain one of the tables involved. You can ignore this warning if you understand the results you are getting. E.g., you may have joined the tables using a non-positional constraint.

If the SHOWOFFSETS setting is specified, then a column indicating the offset between the two tables is included for each offset constraint requested. The name of the column gives the aliases of the tables involved.

Temporal Constraints
HEASARC tables store times as Modified Julian Dates (MJD) using integer and floating point columns. Users can query these columns directly, but the TIME and TIMEDELTA settings provide special support for temporal queries. Users can specify time instants and ranges using the TIME setting in Julian Day, MJD or ISO formats. Generic constraints on temporal columns can only use MJD. Also, the names of the columns used to store times differ among tables. Given the TIME setting, the program examines the table's metadata, finds the start and stop time fields, and builds the proper query.

When users specify a specific time rather than a time range, this time is normally extended into a range by adding a TIMEDELTA tolerance on either side, which defaults to 1 day. To query for coverage of an instant, specify TIMEDELTA=0.
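A Python sketch of the MJD conversion and the TIMEDELTA expansion, assuming the tolerance is applied on both sides of the instant (consistent with the timedelta=0.5 example later in this section):

```python
from datetime import datetime

MJD_EPOCH = datetime(1858, 11, 17)  # MJD 0

def iso_to_mjd(iso):
    """Convert an ISO date/time string to a Modified Julian Date."""
    return (datetime.fromisoformat(iso) - MJD_EPOCH).total_seconds() / 86400.0

def time_range(iso, timedelta_days=1.0):
    """Expand a time instant into the (start, stop) MJD range implied by TIMEDELTA."""
    mjd = iso_to_mjd(iso)
    return mjd - timedelta_days, mjd + timedelta_days
```

E.g., noon on a given day with timedelta=0.5 expands to exactly that full day.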

Time ranges are generally specified using the .. operator. E.g., to include all of 2003 one might specify

    time=2003-01-01..2004-01-01

The input format used for the ISO style format is officially

    yyyy-mm-ddThh:mm:ss
but the parsing used is fairly forgiving. Note that a date specified without a time represents the beginning of the day. To query for a given day a user might try

    time=2003-10-17..2003-10-18

or alternatively

    time=2003-10-17T12:00 timedelta=0.5

The first specifies a range explicitly, while the second specifies an instant in the middle of the day and uses the TIMEDELTA keyword to expand that instant to the full day.

Temporal cross-correlations

Currently there is no special support for temporal cross-correlations but they can easily be made using generic constraints (see below). E.g., if we want to find overlapping ROSAT and ASCA observations that started within a day of one another:
   runquery table=rosmaster,ascamaster ...
where we might expect that there are some other constraints on the query as well. Note that the start time fields for the two tables differ.
Generic Constraints
The CONSTRAINT setting is used to add to the query any constraint that can be expressed as an SQL logical expression. Any number of constraints can be placed on the query. Constraints can refer to any number of tables within the query -- and even to tables not directly in the query.

A simple constraint might be no more than

    constraint='bmag>10'

[where the quotes are to escape the > in the expression], which asks for fainter objects. If the user wanted a range of magnitudes:
    constraint='bmag between 10 and 12'
A range operator .. can also be used as a shorthand here.
    constraint='bmag=10 .. 12'

If this is a correlation we might want to find only those rows where the magnitude appears to change between the two tables:

    constraint='abs(a.bmag-b.bmag) > 1'
This assumes that both tables have a bmag column.

Some special handling is allowed for string matches. If a constraint is given as

    constraint=col='xxx'

and the string xxx contains an '*', then this is assumed to be a wildcard search. It translates to the SQL
    col like 'xxx'
rather than col='xxx'.
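A Python sketch of this translation, assuming the '*' wildcard maps to the standard SQL LIKE wildcard '%' (the helper is illustrative, not Xamin's actual code):

```python
def string_constraint(col, value):
    """Render a string equality constraint as SQL; a '*' in the value
    triggers a wildcard match via LIKE (assuming '*' maps to '%')."""
    if "*" in value:
        return "%s like '%s'" % (col, value.replace("*", "%"))
    return "%s = '%s'" % (col, value)
```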

When doing string queries CLI users usually need to specify the quotes to designate string constants. The constraint

    constraint=pi_lname=white

looks for rows where the column pi_lname has the same value as the column white. The constraint probably should be specified as

    constraint=pi_lname=\'white\'

This syntax is appropriate for many Linux shells and makes sure that the Xamin CLI sees the string pi_lname='white' -- i.e., the quotes are not processed by the shell.
Aggregate Based Constraints
The Xamin CLI allows the use of the GROUP setting to aid users who wish to use SQL aggregate functions (e.g., avg, count, stddev, min, max) in analyzing a table. If the user wishes to apply a constraint that involves one of these aggregate functions, it should be specified in the CONSTRAINT setting just like any non-aggregate constraint. E.g., suppose we want to find all of the Messier objects that have been seen by Chandra at least 10 times. We might have

   runquery table=messier,chanmaster fields=a.name,'count(*)' \
         group=a.name constraint='count(*)>10' \
         offset=a:b:30
When an aggregate function is found, the constraint is added to the HAVING clause of the SQL query rather than the WHERE clause where ordinary constraints are applied.
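The WHERE/HAVING routing can be sketched as follows (illustrative; Xamin's actual parser is more careful than this substring test):

```python
AGGREGATES = ("count(", "avg(", "sum(", "min(", "max(", "stddev(")

def split_constraints(constraints):
    """Route constraints containing an aggregate function to the HAVING
    clause; ordinary constraints go to WHERE."""
    where = [c for c in constraints if not any(f in c.lower() for f in AGGREGATES)]
    having = [c for c in constraints if any(f in c.lower() for f in AGGREGATES)]
    return where, having

where, having = split_constraints(["a.exposure>10000", "count(*)>10"])
# where  -> ["a.exposure>10000"]
# having -> ["count(*)>10"]
```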

Special Tables

There are several special tables in the HEASARC database that are useful for specific kinds of queries. The master tables can be used for discovery queries that determine which tables might have information of interest. The metainfo table provides access to all of the HEASARC's metadata.
Query the master tables
The master tables are simple tables that store the time and position information from each of the HEASARC's constituent tables plus a link back to the original table. The only other column they have is an indication of the default search radius appropriate for that row (almost always a function of the original table). One can very quickly query the master tables to find out which standard tables have information at a given location or time. E.g., to find out all the tables with information on 3c273
    runquery table=master_table.pos_small position=3c273 fields=table_name,'count(*)' \
       group=table_name radius=dsr,60
will do the trick.

To speed up access, the master tables are broken up into two groups. The pos_small and pos_time_small tables include all of the information in the HEASARC where the default search radius is less than or equal to a degree, while pos_big and pos_time_big include those with larger default search radii. The very large default search radii of these few tables significantly slow down queries where we are looking to see if a given position is within the default search radius of a row. The ill-defined positions of the data in pos_big are also rarely of interest to users looking for data in specific locations. So typically users will only wish to query the pos_small table.

If only temporal queries are being made, then the time table can be used. E.g.,

    runquery table=master_table.time time=1990-10-10..1990-10-12 \
       fields=table_name,'count(*)' \
       group=table_name
There are a total of five master tables. These are all in the master_table schema and must be referenced as master_table.xxx when queried.
pos_small:      All rows that have positional information with a positional error less than 1 degree.
pos_time_small: All rows that have both positional and temporal information with a positional error less than 1 degree.
pos_big:        All rows that have positional information with a positional error greater than 1 degree.
pos_time_big:   All rows that have both positional and temporal information with a positional error greater than 1 degree.
time:           All rows with temporal information. No positional information is included in this table.

Generally for queries of position alone, the pos_(small|big) tables should be used. For queries involving both position and time the pos_time_(small|big) tables can be used; these are faster since they have many fewer rows. The time table should be used for time-only queries.
Querying Metadata
The metadata for Xamin is stored in the table metainfo. If a user account is being used, then the table user.metaview is a view that includes both the user and system metadata. These tables can be queried like any other. A few useful queries are suggested here:

What tables are in the system?

   runquery table=metainfo distinct fields=name sortvar=name constraint=type=\'table\'
What are the parameters for the table xxx?
   runquery table=metainfo distinct fields=value sortvar=value \
      constraint=name=\'xxx\' constraint=relation=\'parameter\'
What tables do we have that include information in the radio regime?
    runquery table=metainfo fields=name sortvar=name \
       constraint=relation=\'regime\' constraint=value=\'radio\'
What fields in ROSMASTER have a unit of mjd?
    runquery table=metainfo constraint=type=\'parameter\' \
       constraint=name=\'rosmaster.*\' \
       constraint=relation=\'unit\' \
       constraint=value=\'mjd\'
In this last query we split up the constraints to show them separately rather than putting all four constraints into a single setting.

Output Columns

The columns included in the output are controlled by the three settings FIELDS, SHOWOFFSETS and SYSTEM. The FIELDS setting can be used to request that a standard set of columns or all columns for a given table be included, or it can specify an explicit list of columns or expressions. E.g.,

    fields=name,ra,dec,'vmag-bmag as color'

asks for four columns. The last is a difference of two columns, and we can use the standard SQL syntax to give that column a name -- though that is not required.

    fields=name,ra,dec,'vmag-bmag'

would work fine too, but would use whatever the database comes up with (perhaps a blank) as the column title.

The SHOWOFFSETS setting requests that where possible the system add in a column giving the offset between the table row and the requested position or between two table rows. There can be multiple offset columns in a given query when doing correlations.

Internally, Xamin tables have some special columns added to them. A __row column is a simple integer index of the table. If the table has positional columns then a set of unit vector columns is created with names like __x_ra_dec, __y_ra_dec, __z_ra_dec. The initial x, y, or z indicates the element of the spatial unit vector, while the ra and dec indicate the coordinate columns these unit vector columns relate to. Currently unit vectors are only created for RA and Dec columns.
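A Python sketch of the unit vector computation, assuming the usual convention that (ra, dec) = (0, 0) lies along the x axis:

```python
from math import radians, sin, cos

def unit_vector(ra_deg, dec_deg):
    """The (__x, __y, __z) values derived from an RA/Dec pair in degrees."""
    ra, dec = radians(ra_deg), radians(dec_deg)
    return cos(dec) * cos(ra), cos(dec) * sin(ra), sin(dec)

x, y, z = unit_vector(0.0, 0.0)
# (ra, dec) = (0, 0) lies on the x axis: (1.0, 0.0, 0.0)
```

Storing positions this way lets angular separations be computed from dot products without trigonometry at query time.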

The unit vector columns are used for positional queries, so such queries will not be possible if these columns are deleted. When copying a table result using SAVEAS, specifying SYSTEM ensures that these system columns are also included.

Output Formats

The FORMAT setting is used to control the output format used in displaying the data. The default aligned format is a simple ASCII format where columns are padded to be the same size for all rows and pipes ('|') are used to delimit columns. A stream format foregoes the padding which means that results can be written out immediately as they are read from the database.

An EXCEL format provides a simple spreadsheet result. It uses the older .xls style files rather than the newer Excel XML formats.

FITS data can be written out as a binary table. A dummy header HDU is prepended to make the output a valid FITS file.

The VOTable XML format is fully supported. If this format is selected an XML XSLT stylesheet can be used for further processing, but only if the HDB jar can be used directly.


Some of these output formats support display of data products. The PRODUCTS setting is used to indicate that display of products is desired. It can be given without a value, in which case the base products for the tables being queried are displayed, or a specific set of products can be chosen. Products are displayed in a format-specific fashion. Currently products are supported in the aligned and stream formats for CLI queries. In the VOTable format products are supported using DataLink resource descriptors which point to the HEASARC's DataLink service.

E.g., suppose we are interested in the data associated with a Chandra observation of TY Pyx.

If we simply execute the command:
   runquery table=chanmaster position='ty pyx'

we get

obsid|status  |name            |ra         |dec        |time               |detector|grating|exposure|type|pi       |public_date
  601|archived|TY PYX (HD77137)|08 59 42.70|-27 48 58.7|2001-01-03T02:24:29|ACIS-S  |HETG   |  49760.|GTO |Canizares| 2002-02-21

Number of rows:    1
Number of columns: 12

---- Messages ----
Info: Search Radius
Search radius: 21' from chanmaster
Info: Name resolution
Position ty pyx corresponds to J2000 decimal coordinates:134.922083 -27.822222

We're not interested in the messages (the bits that follow Info:), so we add messages=none, but we want to see what products are available, so we add products.  We run

  runquery table=chanmaster position='ty pyx' messages=none products

and we get
obsid|status  |name            |ra         |dec        |time               |detector|grating|exposure|type|pi       |public_date|__dp|__p_chanmaster_link_ascamaster1_dec|__p_chanmaster_link_ascamaster1_ra|__p_chanmaster_chandra_cxc_abs_root             |__p_chanmaster_chandra_cxc_abs_obsid|__p_chanmaster_link_xmmmaster_dec|__p_chanmaster_link_xmmmaster_ra|__p_chanmaster_point_bib_id|__p_chanmaster_point_bib_table|__w_chanmaster_point_bib|__p_chanmaster_chandra_obs_root         |__p_chanmaster_link_xteindex_ra|__p_chanmaster_link_xteindex_dec|__p_chanmaster_link_rosmaster_ra|__p_chanmaster_link_rosmaster_dec
  601|archived|TY PYX (HD77137)|08 59 42.70|-27 48 58.7|2001-01-03T02:24:29|ACIS-S  |HETG   |  49760.|GTO |Canizares| 2002-02-21|5772|                           -27.8163|                          134.9279|http://cda.harvard.edu/chaser/viewerContents.do?|                                 601|                         -27.8163|                        134.9279|                        601|chanmaster                    |true                    |/FTP/chandra/data/science/ao02/cat2/601/|                       134.9279|                        -27.8163|                        134.9279|                         -27.8163

#  Product link.rosmaster: ROSAT: Nearby (1 deg) ROSAT Observations
  >  query?table=rosmaster&position=134.9279,-27.8163&radius=60

#  Product link.xteindex: RXTE: Nearby (1 deg) RXTE Observations
  >  query?table=xteindex&position=134.9279,-27.8163&radius=60

#  Product link.xmmmaster: XMM: Nearby (1 deg) XMM-Newton Observations
  >  query?table=xmmmaster&position=134.9279,-27.8163&radius=60

#  Product point.bib: HEASARC page for pointing to ADS
  >  xamin/bib?table=chanmaster&id=601

#  Product link.ascamaster1: ASCA: Nearby (1 deg) ASCA Observations
  >  query?table=ascamaster&position=134.9279,-27.8163&radius=60

#  Product chandra.obs: Chandra Observation
  >  /FTP/chandra/data/science/ao02/cat2/601/

#  Product chandra.cxc.abs: Chandra Proposal Abstract
  >  http://cda.harvard.edu/chaser/viewerContents.do?operation=propAbstract&obsid=601

Number of rows:    1
Number of columns: 27

This output has a lot of extra columns that are used only for products: they show how we'd query other tables near the same position, or how to get the bibliographic information associated with this observation.  We're only interested in the actual file products, so we add noproducts=link,point to get rid of the links and bibliographic data.  We get a more tractable output:

runquery table=chanmaster position='ty pyx' messages=none products noproducts=link,point

which returns
obsid|status  |name            |ra         |dec        |time               |detector|grating|exposure|type|pi       |public_date|__dp|__p_chanmaster_chandra_cxc_abs_root             |__p_chanmaster_chandra_cxc_abs_obsid|__p_chanmaster_chandra_obs_root
  601|archived|TY PYX (HD77137)|08 59 42.70|-27 48 58.7|2001-01-03T02:24:29|ACIS-S  |HETG   |  49760.|GTO |Canizares| 2002-02-21|5772|http://cda.harvard.edu/chaser/viewerContents.do?|                                 601|/FTP/chandra/data/science/ao02/cat2/601/

#  Product chandra.cxc.abs: Chandra Proposal Abstract
  >  http://cda.harvard.edu/chaser/viewerContents.do?operation=propAbstract&obsid=601

#  Product chandra.obs: Chandra Observation
  >  /FTP/chandra/data/science/ao02/cat2/601/

Number of rows:    1
Number of columns: 16

Here we see that the data for this observation is in the directory /FTP/chandra/data/science/ao02/cat2/601/.  We could filter this output and generate a wget command for it:

   wget -r https://heasarc.gsfc.nasa.gov/FTP/chandra/data/science/ao02/cat2/601/

Or perhaps we only want the event files associated with this observation.  We can filter for event files with filterstring='*/*evt*'.  We'd have to know enough about how these files are stored within the directory structure to define this string: the files will be in a subdirectory (the initial */ means we look inside all subdirectories), and they have the string evt in their names.
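The glob semantics can be previewed in the shell. A case pattern behaves much like the filterstring, with one caveat: in a shell case pattern '*' can also cross '/' boundaries, which pathname globs do not allow, so this is only an approximation:

```shell
# Preview which product paths a filterstring glob such as '*/*evt*'
# would keep.  Paths are from the example output in this guide.
matches() {
    case "$2" in
        $1) echo yes ;;
        *)  echo no ;;
    esac
}
r1=$(matches '*/*evt*' 'primary/acisf00601N003_evt2.fits.gz')
r2=$(matches '*/*evt*' 'oif.fits')
echo "$r1 $r2"
```

The first path has a directory component and contains evt, so it matches; the second has neither, so it is skipped.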

We can run the command:
java -jar ~/lib/users.jar table=chanmaster position='ty pyx' messages=none products noproducts=link,point filterstring='*/*evt*'
and we get this output

obsid|status  |name            |ra         |dec        |time               |detector|grating|exposure|type|pi       |public_date|__dp|__p_chanmaster_chandra_cxc_abs_root             |__p_chanmaster_chandra_cxc_abs_obsid|__p_chanmaster_chandra_obs_root
  601|archived|TY PYX (HD77137)|08 59 42.70|-27 48 58.7|2001-01-03T02:24:29|ACIS-S  |HETG   |  49760.|GTO |Canizares| 2002-02-21|5772|http://cda.harvard.edu/chaser/viewerContents.do?|                                 601|/FTP/chandra/data/science/ao02/cat2/601/

#  Product chandra.obs: Chandra Observation
  >  /FTP/chandra/data/science/ao02/cat2/601/primary/acisf00601N003_evt2.fits.gz
  >  /FTP/chandra/data/science/ao02/cat2/601/secondary/acisf00601_000N003_evt1.fits.gz
  >  /FTP/chandra/data/science/ao02/cat2/601/secondary/acisf00601_000N003_evt1a.fits.gz

#  Product chandra.cxc.abs: Chandra Proposal Abstract
  >  http://cda.harvard.edu/chaser/viewerContents.do?operation=propAbstract&obsid=601

Number of rows:    1
Number of columns: 16
We can use grep to extract just the lines we want:
java -jar ~/lib/users.jar table=chanmaster position='ty pyx' messages=none products noproducts=link,point filterstring='*/*evt*' | grep evt

which just returns a few lines that we could easily pop into a script.
  >  /FTP/chandra/data/science/ao02/cat2/601/primary/acisf00601N003_evt2.fits.gz
  >  /FTP/chandra/data/science/ao02/cat2/601/secondary/acisf00601_000N003_evt1.fits.gz
  >  /FTP/chandra/data/science/ao02/cat2/601/secondary/acisf00601_000N003_evt1a.fits.gz
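Those product paths can then be turned into full download URLs by stripping the '>' prefix and prepending the archive's base URL (https://heasarc.gsfc.nasa.gov, as in the wget example above). A sed sketch:

```shell
# Convert '  >  /FTP/...' product lines into download URLs.
base='https://heasarc.gsfc.nasa.gov'
lines='  >  /FTP/chandra/data/science/ao02/cat2/601/primary/acisf00601N003_evt2.fits.gz
  >  /FTP/chandra/data/science/ao02/cat2/601/secondary/acisf00601_000N003_evt1.fits.gz'

# -n plus the p flag prints only lines that matched, i.e. /FTP/ paths.
urls=$(printf '%s\n' "$lines" | sed -n "s|^ *> *\(/FTP/.*\)|$base\1|p")
printf '%s\n' "$urls"
```

Each resulting URL could then be handed to wget or curl.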

There are a few things to keep in mind when using the CLI to access data products.  Remote data products are not filtered: the Chandra abstract stored at the CXC is always returned regardless of how we filter the data.  When using a filterstring, multiple files may match, as we saw here.  When data products are queried, the file system is searched to make sure they exist; this can take a while if you are asking for hundreds or thousands of rows.  You can define constraints on the query to break it up into more manageable pieces.


Settings Summary

This table lists key settings organized into groups.

Commonly used settings

Table sources
    Table        Local database table[s] for queries or correlation
    Upload       VOTable[s] to query or correlate (also UploadData)
    Sources      List[s] of sources to use for positional queries (also SourcesData)
    Union        List of tables to query sequentially in union
    UnionTable   Table to be replaced by union tables
    AntiCorr     Table to be anticorrelated against
    Divider      Divider string for CSV/TSV/... style files
    HeaderLines  Number of header lines in CSV/TSV/... style files

Query control
    Position         One or more targets for cone search
    Radius           Radius for position or sources match (arcminutes)
    Time             Specify the time to be used in the query
    TimeDelta        Look for the given interval around an instant
    Coordinates      Coordinate system to be used [also icoordinates and ocoordinates]
    Offset           Positional offset constraint between tables
    Distinct         Do not show repeated rows
    Group            Group outputs
    Constraint       SQL constraint to be satisfied by output rows
    Sortvar          Fields/expressions to sort output by
    SkipQuery        Don't run the query (with showquery or showcost)
    TimeLimit        Only run queries with shorter guesstimated times
    Resolver         Use the given resolver[s] to convert target names to coordinates
    EndTimeOptional  Allow end times to sometimes be null
    OuterJoin        Support an outer join on a table

Output control
    Format       The format of the result (votable|stream|aligned|Excel|FITS)
    Saveas       Save a table into the user's space
    ShowOffsets  Include offset columns in results
    ResultMax    Maximum number of rows to display
    Fields       Columns or expressions to be displayed
    Products     Include data products in the output
    System       Include system fields in the output
    ShowAnti     Show anticorrelating table name[s] in results
    Generic      Allow generic data products
    ShowQuery    Show the SQL query that will be sent to the database
    ShowCost     Show a guesstimate of how long the query will take
    TimeLimit    Forgo/stop query if estimated/actual time exceeds limit
    User         The name of the user account
    Password     The password for the user account
    Account      Manage the account
    MyTables     List the tables in the current account
    Delete       Delete a table from the current account

Settings Details

You can get help information in the batch interface itself by doing
    query help
    query help=subject
where subject is a particular topic or command. E.g., to get help on the table setting, just enter
    query help=table
Setting keywords are case-insensitive. Values may be case-sensitive depending on their context.

Details on all settings follow in alphabetical order.  This includes some settings that are not necessarily part of the batch interface but which are used within the Xamin system and included here so that there is a single place describing all settings.

Account

Create or manage a user account.  Options exist to create, delete, activate or update user accounts.  Generally these requests should be managed through the Xamin Web interface.

AntiCorr

The name (or alias) of a table against which an anticorrelation is to be done. If an anticorrelation is desired against a correlation, then the correlation should first be performed and saved. No fields from the anticorrelating table may be requested as output fields.

The anticorrelation is done in two stages. In the first step the query is performed as a correlation, and all sets of matching rows in the tables other than the anticorrelating table are determined and saved. Then the query is re-executed and the anticorrelating table and any constraints that involve the anticorrelating table are removed. An additional constraint is added such that any row that was returned in the first stage is not included. Constraints that do not involve the anticorrelating table are included in both queries.

E.g., suppose we want to find all of the targets seen by both ROSAT and ASCA whose ROSAT observation time is longer than the ASCA SIS observation time. However we are not interested in those targets that have already been observed by Chandra with an exposure greater than 1000 seconds. This query might be set up as

In the initial query we find all of the nearby pairs of ROSAT and ASCA observations meeting our exposure constraints and which are near a Chandra observation with a long exposure. Next we run a correlation on just rosmaster and ascamaster where we keep the constraint on the ROSAT and ASCA exposures, but we drop any matches we found in the first query.


 tables=messier,ascamaster offset=a:b:10 anticorr=ascamaster
asks for the entries in the Messier catalog for which there is no ASCA observation within 10'.

The list of data formats available for a particular VO SIA or SSA service.  This is normally not used in the CLI interface.


The standard table to be used for bibcode queries.



BibCode

One or more comma-separated bibcodes.  Only rows in the table which were in the associated article will be returned.   The search is accomplished by looking at the zzbib table (or the tables specified in the bibbase and bibtables settings).

   bibcode= 1924MNRAS...84..308E,1970ApJ...161L..77K


One or more tables to use to supplement the bibbase table in looking for bibliographic information.


This keyword is used to enable Xamin to more exactly emulate the output from Browse.



This setting determines how columns are filtered when presented to the user. There are three kinds of filtering: time format, position format, and precision.  Times may be formatted as either ISO or MJD, positions are formatted as sex (sexagesimal) or dec (decimal), and values are rounded to the specified precision or rendered to full machine precision (round/noround).  Different output formats have different defaults.  E.g., FITS would be dec,mjd,noround, while a text output would be iso,sex,round.  One, two or three filters may be specified.



This setting is used to compare an existing table with an SQL query.  It requires the SQLController.  Note that the query should specify an explicit ordering or the comparison may not work.  This is mostly used in automated checking to look for changes in user-generated tables, but it is not in principle limited to that.

    sql=select * from messier where constell='AND'

Constraint

An expression specifying a constraint that you wish to be satisfied on each output row. This can be any logical-valued SQL expression.


   constraint='a.ra<3 or a.ra between 7 and 8'
Note that the quotes are not part of the argument. They are included to escape the shell processing of special characters like < or space.

This setting gives the class that should be used to handle the overall control for running the query.    When specifying a complete SQL statement (using the SQL keyword) the SQLController should be used.   This is not available through the CLI interface but can be used when accessing the HDB library directly (i.e., from within GSFC) and in the VO TAP interface to Xamin.



This setting is a link to the location of the HEASARC's coordinate converter service.  This is used in the single box interface when it is attempting to determine if tokens the user has entered might be coordinate names.  Normally this would not be used in the CLI interface.

Coordinates [ICoordinates, OCoordinates]
The coordinate system used for position input and output. It can be overridden for either using the icoordinates or ocoordinates setting. The coordinates should be a string of the format
(J|B|E)nnnn or Galactic or ICRS.
Here J indicates Julian coordinates, B, Besselian coordinates, and E, Julian ecliptic coordinates. The nnnn is a numeric epoch.



CoordPrecision

The precision to which coordinate values are to be specified.  For decimal coordinates this is the number of decimal places of the values in degrees.  For sexagesimal coordinates
    0:   Degrees
    1-2: Arcminutes
    3-4: Arcseconds
    5+:  Arcseconds with CoordPrecision-4 decimal digits.
For right ascension values one is added to the input value to accommodate the change in scaling.

might result in
    RA=10 13 22.22, Dec=-14 08 12.4 
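For intuition, the conversion underlying the sexagesimal rendering can be sketched in the shell.  This is only a client-side illustration of what CoordPrecision controls, not the actual Xamin formatting code; the RA value is the resolved TY Pyx position from the earlier example, and the two-decimal seconds correspond to a high precision setting:

```shell
# Render a decimal RA (degrees) as sexagesimal hours, minutes, seconds.
ra_deg=134.922083
ra_sex=$(awk -v d="$ra_deg" 'BEGIN {
    h = d / 15                 # RA degrees -> hours (the scaling change)
    m = (h - int(h)) * 60      # fractional hours -> minutes
    s = (m - int(m)) * 60      # fractional minutes -> seconds
    printf "%02d %02d %05.2f", int(h), int(m), s
}')
echo "$ra_sex"
```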


The database (within the Postgres DBMS) to use for connections.



Within DataLink processing the id parameter input to the DataLink request is transformed into the DataLinkID setting.


To point to the top level products for the 13th row of the rosmaster table.

Delete

Delete a user table. The user and password settings must also be specified. Note that the name of the table to be deleted does not include the user name.


    user=someuser delete=myfirsttable

Delimiter

For text output formats, the delimiter between columns.  This is normally a pipe character '|'.  The special value '\t', i.e., a backslash followed by a 't', may be used to get tabs.  This is in some sense the inverse of the divider setting, which specifies the delimiter that is used when we read files.


Distinct

Show only rows that differ. Since not all columns of a table or join are shown, it is possible for two rows to be output with identical values. Setting this keyword suppresses such rows. This can be particularly useful if you are doing a cross-correlation and want to get only the rows in one of the tables.


   query table=chanmaster,rosmaster offset=a:b:5
will show more than 30,000 results repeating many rows in chanmaster for each row in rosmaster that it matches.
   query table=chanmaster,rosmaster fields=a.standard offset=a:b:5 distinct
shows only the unique rows in chanmaster that participate in the query. Similarly
   query table=chanmaster,rosmaster fields=b.standard offset=a:b:5 distinct
shows the rows in rosmaster. Reversing the order of the tables would have had the same effect.

Distinct can also be used on single table queries. E.g., to get a list of ROSAT PI's

   query table=rosmaster field=pi_lname distinct
Divider

The separator that is used in a CSV/TSV/... style upload file. The default is a comma. The presence of this keyword indicates that uploads are in the CSV/... style. The special string "\t" may be specified to indicate tab dividers. Other dividers may need to be escaped appropriately for the user's shell.



This setting is used to indicate a top level directory within which all legal downloads are expected to occur.

EndTimeOptional

This keyword is used when a temporal query is made of a table that has start and end time fields, but where the end time may sometimes be null. In that case the start and end times will be considered to be the same. Otherwise all such rows will be eliminated from the results. This is commonly needed when querying master tables.

This specifies a comma separated list of people who should be informed when Xamin Feedback is submitted.

Fields

This keyword specifies the output columns to be shown. Possible values are
    All standard columns for all participating tables.
    All standard columns for the table with alias X.
    All columns of all participating tables.
    All columns for the table with alias X.
    Column names
    The name of a column (possibly including an alias)
    An SQL expression
By default the standard fields for the first table are shown.


Show the standard columns for all tables.
Show all columns for the first table, none from the second and the standard columns from the third
Show the name, ra, and dec columns from the first table that has each field -- they might even come from separate tables.
Show the given columns and expressions.

This internal setting is used in DataLink processing.


In the Xamin user interface, the FileNodeDirectory can be used to specify where to find files that define nodes in the Xamin tables tree structure.


This can be used to specify an XSL filter to be applied to the data being output as a VOTable.  Since users cannot specify a local file name, this is mainly for special case use within the Xamin system.


FilterString

When looking at data products a FilterString can be used to look for specific glob matches to find only certain files within data products.
Note that just as with globs in the file system, directory structure must be matched explicitly.  E.g., if we have a data product which is a directory and we want to find pha files which are in a subdirectory of that directory, the filterstring might look something like '*/*pha*', not just '*pha*'.



To find only files that end with ".evt.gz" in the top directory


To find files that end with '.evt.gz' in subdirectories that start with 'p'.

Format

The output format for the table. Currently supported formats are
    votable  The VOTable XML output.
    stream   A pipe-delimited format which does not adjust the size of the fields to a constant, allowing results to be streamed as they are received from the server.
    aligned  A pipe-delimited format similar to the current Batch format. The columns are padded to ensure that all columns are the same width.
    excel    An Excel XLS file.
    fits     A FITS binary table.
There is also an XML format designed for metadata queries.



When constructing URLs based upon the current URL, we may wish to use not the actual port being used in the servlet, but a port that was forwarded to it.  E.g., Apache may be forwarding some requests from port 80 to port 6080 to be handled by TOMCAT.  If the current port matches the forwarded port, then when creating new URLs use the original port.
This setting specifies two comma separated integers which are the original port, and the port that is being forwarded to.


Generic

Include generic products. Generic products are products that are not specifically linked to a table but can be linked to any table that has the appropriate data fields (typically RA and Dec). This includes NED, SIMBAD and SkyView image services.   See Products.


Group

Include a GROUP BY clause in the query.  Group clauses are used when we have functions which work on multiple rows, e.g., avg, max or count.  Users may specify the group fields explicitly or specify group=1 and the program will attempt to determine them automatically.  E.g., suppose we have an observation table with pi_fname and pi_lname fields.  We want to find out how many observations each PI has:
    table=obstable fields=pi_fname,pi_lname,count(*) group=1
should do the trick.  Here group=1 asks that we use all of the output fields that don't look like aggregate functions as the group fields.  So this is equivalent to
    table=obstable fields=pi_fname,pi_lname,count(*) group=pi_fname,pi_lname

Note that SQL requires that all non-aggregate values be included in the group by clause.
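What the database does with the GROUP BY can be mimicked client-side.  With hypothetical pipe-delimited pi_fname|pi_lname rows, awk produces the same per-PI counts:

```shell
# Emulate fields=pi_fname,pi_lname,count(*) group=1 with awk.
# The rows below are made-up sample data.
rows='Jane|Doe
John|Roe
Jane|Doe'
counts=$(printf '%s\n' "$rows" |
    awk -F'|' '{ n[$1 "|" $2]++ } END { for (k in n) print k "|" n[k] }' |
    sort)
printf '%s\n' "$counts"
```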

HeaderLines

The number of header lines in the CSV/TSV/... files. The first header line is assumed to be the column names separated by the specified (or default comma) divider. The remainder of the header lines are ignored.


Help

Display help. With no arguments the entire help file is displayed. The values Summary, Overview and Examples, as well as the individual setting keys, may be used to get more specific help.


If the help setting is seen, no further processing is done.

The port we expect the HeraServer to be running on.


ICoordinates

The coordinate system to be used for user input coordinates.  It has the same syntax as Coordinates.


In early stages of DataLink processing, the ID setting is used to hold one or more DataLink ID strings.


In DataLink processing the input URL may have been of the form
or even
    .../xamin/vo/datalink/table1/table2/DATALINKID with two or more tables identified in the URL.
These tables are ignored by the code that is used to process the URL, but are included in the IDPrefix in later DataLink processing.  This setting is not used in normal CLI processing.


In interpreting the input URL the current host may be indicated as LocalHost.  This setting is used to translate this hostname to an appropriate name for construction of URLs that will work generally.


An internal log file may be requested by specifying the Logging and LogFile arguments.  However this is intended for internal use not CLI users.


The level of logging requested.


The base URL for the Login service servlet to be used in creating and connecting to user sessions.


When users request to save a new table, the current size of their schema is compared against this value.  If it exceeds this, the query may fail.


This is a VO synonym for ResultMax and is used in some VO interfaces.


When rendering large table results the query will fail if the available memory falls below this limit.  Streaming formats are generally not affected by this limit.


Messages

This setting is used to indicate the kind of message handler that will be used for the query.  The message handler is used to convey information to the user in addition to the actual query results.  Messages can include summaries of the tables involved in the query, warnings that limits on the query were exceeded, and information the user may have requested, including the SQL used in the query.  Normally the class of the message handler is associated with a specific output format, but in some circumstances this can be overridden by the user.  Values currently used for this parameter are:

  • xml: write the messages as INFO elements in a VOTable XML
  • none: do not display messages
  • stream or error: stream messages to standard error
  • stdout: stream messages to standard output
  • json: return messages as JSON structure

The default depends upon the requested format of the query.


MyTables

List the tables belonging to the user (the user and possibly password settings are required).


   User=someuser mytables


NoProducts

When products is specified without a value, this setting suppresses linked tables and/or pointed (bibliographic) products.



The first example suppresses both linked table and bibliographic products.  The second example suppresses only the links.  See Products and Generic.


The _nullValues keyword is used internally to store values that the user has explicitly set to null as opposed to simply not specifying a value.   In some cases when applying defaults, the system will then not set the default and simply leave the keyword null.  This should not be used by users in the CLI interface.


OCoordinates

The coordinate system to use for output coordinates.  This uses the same syntax as coordinates.

Offset

This indicates that only rows where the difference in position between the two tables specified is less than the given value should be returned. The participating tables are indicated by their aliases. The three tokens in the offset are separated by colons.


    table=rosmaster,ascamaster offset=a:b:.01

This example does a spatial cross-correlation between rosmaster and ascamaster with a tolerance of 0.01 arcminutes. In principle a constraint could be used to specify this, but it is quite tedious to write out. If the third element of the offset triad is omitted, a default radius based on the first table is used. If ShowOffsets is also specified, then a column will be generated giving the offset distance.
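For intuition, the quantity that offset constrains (and that ShowOffsets reports) is just the angular separation between the positions in the two rows.  A client-side sketch using the haversine formula, with the chanmaster coordinates and the resolved TY Pyx position from the earlier example:

```shell
# Angular separation in arcminutes between two (ra, dec) pairs,
# computed with the haversine formula.
sep=$(awk 'BEGIN {
    d2r = 3.141592653589793 / 180
    ra1 = 134.9279;   dec1 = -27.8163     # chanmaster row
    ra2 = 134.922083; dec2 = -27.822222   # resolved ty pyx position
    dlat = (dec2 - dec1) * d2r
    dlon = (ra2 - ra1) * d2r
    a = sin(dlat/2)^2 + cos(dec1*d2r) * cos(dec2*d2r) * sin(dlon/2)^2
    printf "%.2f", 2 * atan2(sqrt(a), sqrt(1 - a)) / d2r * 60
}')
echo "$sep arcmin"
```

The separation here is well under an arcminute, so this pair easily passes a constraint like offset=a:b:5.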

OuterJoin

An outer join may be desired when you wish to query a table and include information from it, but not exclude results from a primary table even if there is no match in the secondary. E.g., suppose we have a table of objects, some of which have been observed by Chandra. We may wish to include the Chandra exposure time when there is a Chandra observation. A simple join between the object table and the Chandra master table would exclude all objects not observed by Chandra. We can use outer joins to get around this restriction. The syntax of this value is
where the alias is the name of the table with which the outer join is being made, constraint includes all of the constraints of the table the join is being made against, and type should be one of LEFT, RIGHT or FULL. It defaults to LEFT.

Outer joins are used in Xamin for index queries. A given table may or may not have a regime, mission or contain specific object types. We want to be able to perform queries that gather this optional information but include all tables regardless of whether it is present.


might be appropriate if our table has a field a.obs_no which contains the Chandra observation number or -1 if never observed.

This setting is the only argument that is actually processed on the client side in the users.jar.  It specifies the file to be written.  The special value '-' may be used to indicate standard output.  The same setting can be used when the HDB library is used directly but in that context the '-' special value is not supported.


Password

Specify the password for the given user when the user account requires a password. A password is required whenever the user keyword is required.

This setting is used in the Xamin Web interface to define a list of missions to be included in a Popular Missions category.

Position

This is used to specify a search around one or more positions in the sky. Positions may be specified as targets or coordinates. More than one position may be specified, separated by semicolons within a single setting, or you can specify multiple position settings.

If more than one table is participating in the query, then it is assumed that a position refers to the first table with ra and dec fields.

If both a single position and showOffsets are specified, then an offset column will be shown. However if you want to see the offset for multiple inputs, then a source list should be used.


    position='10 13 47, 9 18 22'
    position='3c273;10 19 43, 10 14 03.1;9.27 14.24'
    position=3c273 position='a1656;a2156'
The first example specifies a single position. The second specifies three different positions to be queried. The last also specifies three different positions.

For the Xamin web interface this setting is used to specify one or more top level nodes to be rendered before the children of the root 'nd' node.  This is not used in the CLI interface.


Names of schemas which may not be deleted.


In the SxA VO interfaces this setting may be used to specify the requested output format.


It is possible for the template associated with a top level product to have multiple matches.  E.g., the top level products for the XTEINDEX table may match many different XTE observations.  When attempting to find the subproducts of such a top level product, we need to know which match was actually made.  This is done using the ProductRoot setting.  Normally this would not be used in CLI requests.

Products

Include products information for the specified rows if products are supported for the chosen output format. This may be specified without a value, in which case all top level data products for the requested tables are shown, or the user may specify a list of products to request. If an element in the list includes a dot (e.g. rosat.obs_pspc), then that specific product is searched for. If not, then the top level products in that hierarchy are looked for. E.g., products=rosat.obs_pspc,asca searches for a particular ROSAT product and any top level ASCA data product (a cross-correlation may have products from multiple missions).  See the discussion in section 12 for more information and examples.

When products is specified without an argument the value of the noproducts setting may be used to suppress linked table and bibliographic (pointed) products.  The generic keyword
is used to specify that generic (currently only positional) products should be included.


products
Show all top level products except generic products

products noproducts=link,point
Show only mission data products. Do not include linked tables, bibliographic (pointed) products or generic positional products

products generic
Show all products

Show the data product rosat.obs_im, the link to the ascamaster table and any generic data products

This argument is used in the TAP interface to specify the SQL to be executed.


Radius

The radius, in arcminutes, to be used in positional cone-search queries.  This may include database values.  The ability to set two values is used in discovery queries where we use a constant maximum numerical value which plays well with the indices, and a smaller value that varies from row to row.
Look within 15' of the specified position.

Require all rows to be within 60' of the specified position and also within dsr arcminutes.  The table should be constructed such that dsr is never > 60.


If no radius is specified the query should fail.



This is normally set by the servlet and is used in logging.


Used within DataLink processing to indicate the specific product or products that have been requested.

Resolver

Specify the name resolver or resolvers to be used in converting target names to coordinates. Resolvers include SIMBAD, NED and GRB. The default is to search all of those in that order and return the first match.

ResultMax

This argument gives the maximum number of rows to be returned. Normally the query will give an indication of whether additional rows are available if the limit is reached. (Internally the program asks for one more row than you requested, just in case your limit would match the number of rows returned with no limit.) Values less than or equal to 0 are treated as no limit.


Saveas

Save the query result as a table in the current space. An existing table cannot be overwritten (use Delete to remove it first). The name of the table does not include the user name prefix. After a table has been saved it can be used in subsequent requests as user.name. If the table already exists the results of the current query will be appended to the existing table so long as the output fields are compatible.


    user=someuser saveas=mytable

This setting is used to specify that specific settings files should be read. This is useful when the HDB library is being called directly but not when called through the CLI.

ShowAnti

If an anticorrelation is being performed, then show the name of the table being anticorrelated against as the first column in the result. This is most commonly useful when the anticorrelation is being done against a union. This allows the user to effectively do anticorrelations against multiple tables in a single query while retaining information in the results about which table was NOT matched for each row of the results. Example:

ShowCost

This keyword (whose value is not used) specifies that a cost estimate (in CPU time) for the query should be given.


This keyword has no value. It adds columns to the end of the table which give the offsets between the tables in cross-correlations and the positional offset in a cone search if there was only a single position specified.
This keyword (whose value is not used) specifies that the SQL used in the query should be shown.
This keyword specifies that the query should not be run. Normally it would be used in conjunction with ShowCost or ShowSQL. It can also be useful if debugging a complex query since any expressions used in the query will be parsed and checked even though the query is not actually run.
This argument gives one or more expressions that should be used for sorting the results. While sort fields are normally columns, arbitrary SQL expressions may be used.

Note that if you are applying a row limit to the query (see ResultMax), then the entire query must be run and sorted before the first N rows are output.

A '+' or '-' may be suffixed to the expression to indicate an ascending (default) or descending order to the sort.

Instead of an expression, you can also give the index of the column to sort on. This may be required if you are doing a union query. The first column is indexed as 1.

The first example sorts the results from +90 to -90 degrees in declination. When two rows have the same declination the smaller RA is shown first.

The second example might sort a cross-correlation with the rows with the largest deviations in ra shown first.

The last example sorts by the second column in the output.
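As a rough illustration of the three cases described above, the request fragments might look like the following; the setting name sortvar and the column names here are assumptions for illustration, not taken from this document:

```python
from urllib.parse import urlencode

# Hypothetical sort expressions; 'sortvar' and the column names are
# illustrative assumptions.
dec_then_ra = urlencode({"sortvar": "dec-,ra"})        # dec descending, then ra ascending
by_ra_offset = urlencode({"sortvar": "abs(a.ra-b.ra)-"})  # largest RA deviation first
by_column_2 = urlencode({"sortvar": "2"})              # sort by the second output column
```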


This argument is used with the SQLController to specify the SQL to be executed.  This is not available through the CLI interface.


In the Xamin Web interface this setting is used to specify nodes that will be displayed immediately after the children of the base 'nd' nodes.


In the SxA processing this setting is used to convey whether the user requested a format that is supported in the requested service.


When displaying columns, display internal system columns, __row and the unit vector columns if present.  If you are saving a result and wish to do positional searches on the results later, this may be desirable.



This setting is used in contexts where mail is sent to indicate the source host of the mail.


This setting is used in contexts where mail is sent to indicate a nominal user sending the mail.

Table is used to define one or more tables to be used in the query. Note that if multiple tables are specified, then a query is made of the join of the specified tables, not separate queries of each.


When multiple tables are included in a query, the tables and their columns are referred to by aliases. The alias for the first table is 'a', the second is 'b', and so forth. E.g., in the example above, if we wish to place a constraint on the ASCA declination we would refer to it as b.dec.


TableMetaData


This flag was used in earlier versions of Xamin to indicate that the VOTable output should be supplemented with non-standard metadata structures that could be used to include data product information.  The data product framework has been substantially revised to use DataLink requests, so this is no longer used, but may still affect VOTable outputs.

Time is used to specify a general time constraint on the query. Times are specified as interval1,interval2,... where each interval may be given as an instant or as a range with two instants separated by the .. operator. An instant may be specified in either ISO format or as a JD or MJD time. The ISO format is somewhat flexible. Years may be specified as either two or four digits (two-digit years > 66 are assumed to be prefixed with 19, while years <= 66 are prefixed with 20). The months, days, hours, minutes and seconds may have either one or two digits. Seconds may have a decimal component. Seconds are optional when a time is specified. Note that time handling uses the standard Unix time conventions and is unlikely to accommodate leap seconds accurately.

Julian dates are assumed for numbers > 1000000 while MJD is used for smaller numbers.

When the user specifies an instant, the time range is expanded in both directions using the value of the timedelta setting. E.g., if you specify Time=24093 with timedelta at its default value of 1, the search will look for all data overlapping the range 24092.0 to 24094.0.
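The JD/MJD heuristic and the instant expansion just described can be sketched as:

```python
def time_scale(value: float) -> str:
    """Numbers > 1000000 are taken as Julian Dates, smaller ones as MJD."""
    return "JD" if value > 1_000_000 else "MJD"

def expand_instant(t: float, timedelta: float = 1.0):
    """Pad a single time instant on both sides by timedelta (in days)."""
    return (t - timedelta, t + timedelta)
```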

Some tables record a single time value rather than an explicit range. For these tables a match occurs when the instant specified in the table is within the user-specified range. When the table specifies a range, a row is included if there is an overlap between the table interval and the user interval.


   time=50129.1..50289              An MJD range
   time=2450129.6..2450289.5        The corresponding JD range
   time=2000-10-1..2000-11-23       An ISO range
   time='00-10-01 10:12..00-10-01 10:15'  A three minute range in
             October of 2000.
   time=50129.283                   An MJD instant
   time=2004-12-19                  An ISO instant
   time=50029,50068,50093..50098    Three times.
Padding added to both sides of a time instant when the user specifies a single time rather than a time range. TimeDelta may be set to 0, in which case rows are returned only where the user's time is within the range specified in the table row, or is equal to the time specified in the table row when the table gives only a single time. The unit of TimeDelta is days. Note that the padding is added to both sides, so the total searched range is twice as large.

Example: (note 1/24=.04166666..)
   time=2000-10-10T12:00 timeDelta=0.041667 
This will match between 2000-10-10T11:00 and 2000-10-10T13:00 
A list of tables to be substituted for the UnionTable in a union query. The result of the query is the concatenation of all rows returned from all of the queries. Only columns that are common to all tables in the union (including the UnionTable) may be used in the query.

Unions may be used with anticorrelations but there are some limits. For a simple anticorrelation (A^B), both the source table and the anticorrelating table may be a union. However if more than two tables are involved in the anticorrelation not counting the union (AxB ^ C), then only the anticorrelating table may be a union.

If the anticorrelating table is a union, then the behavior of the anticorrelation is controlled by the presence of the ShowAnti setting. If ShowAnti is not specified, then the union is performed before the anticorrelation is done, i.e., rows are returned only if no match is found in any of the tables in the union. If the ShowAnti keyword is present, then a row is returned for each table in the union for which no match is found. In other words, if the setting is not present the query is symbolically performed as A^(B|C|D), but if it is present the
query is (A^B)|(A^C)|(A^D).
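The difference between the two forms can be illustrated with a toy set model (real anticorrelations match on position rather than value equality; the sets here are arbitrary):

```python
A = {1, 2, 3, 4}
B, C, D = {1}, {2}, {1, 2}

# Without ShowAnti: union first, i.e. A ^ (B|C|D) --
# a row survives only if it matches none of B, C, D.
without_showanti = A - (B | C | D)

# With ShowAnti: (A^B) | (A^C) | (A^D), keeping the
# name of the unmatched table with each result row.
with_showanti = [(name, A - t) for name, t in (("B", B), ("C", C), ("D", D))]
```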

This keyword indicates which table is to be replaced by the tables mentioned in the Union setting. If the UnionTable is not given, then the first table in the query is used.
A file or list of files to be uploaded as temporary tables which may then be used within the query. Upload files may be VOTables, TDAT files or CSV/TSV-style files. If the file's name ends with .tdat, then TDAT format is assumed. If the file name ends with .csv, or the divider setting is specified, then CSV-style format is assumed. Otherwise VOTable format is assumed.

For CSV files the format of the columns is inferred from the data. Columns whose values can all be converted to integers are assumed to be of integer type. Float is tried next, and if neither of these works the column is assumed to be a text string.
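The inference order can be sketched as:

```python
def infer_type(values):
    """Try int, then float; otherwise treat the column as text,
    mirroring the CSV upload rule described above."""
    for caster, name in ((int, "int"), (float, "float")):
        try:
            for v in values:
                caster(v)
            return name
        except ValueError:
            continue
    return "text"
```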

Fields in uploaded tables may be referenced in the query using aliases just as tables specified in the table setting. The aliases for the uploaded tables are always after the preexisting tables. E.g., in a query

     upload=sources.list,mydata.fits table=rosmaster
we have a three-way correlation where the fields of the ROSMASTER table will use the alias a, the fields in the source list, sources.list, will use b, and the table in the first extension of mydata.fits will use the alias c.
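The alias assignment (tables from the table setting first, then uploads) can be sketched as follows, using the table and file names from the example above:

```python
def assign_aliases(tables, uploads):
    """Aliases 'a', 'b', ... go to the tables named in the table
    setting first, then to uploaded files, in order."""
    names = list(tables) + list(uploads)
    return {chr(ord("a") + i): name for i, name in enumerate(names)}
```

For the example query, `assign_aliases(["rosmaster"], ["sources.list", "mydata.fits"])` assigns a to rosmaster, b to sources.list, and c to mydata.fits.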
Specify the user when actions requiring persistence are requested. The SaveAs, Delete, and MyTables settings require this setting. The appropriate password setting must also be given.



This setting is used in xamin/CurrentVersion to indicate the current Xamin version.


This setting is normally specified by Xamin servlets and passed to the HDB code, where it is used as the base in the construction of other URLs.