Complete installation guide

This is the complete installation guide. If it looks too complex, please have a look at the quick installation guide. There is also an installation guide on the wiki which covers most errors that have ever occurred during installation and startup.

We cover various base systems here, in particular Ubuntu Linux and FreeBSD. We also cover different variants of installation and operation, including working with or without meta data, the XAPI wrapper, area creation, and the management of custom output.

Requirements

With a POSIX-conforming operating system (this includes all kinds of Linux as well as FreeBSD, OpenBSD and several others), you have already fulfilled most base requirements.

Concerning hardware, I suggest at least 4 GB of RAM. The more RAM is available, the better, because caching disk content in RAM significantly speeds up Overpass API. Processor speed has little relevance. The hard disk requirements depend on what you want to install: a full planet database with minutely updates should have at least 250 GB of hard disk space at its disposal; without minute diffs and meta data, 100 GB already suffice.

To automatically download diff files, you need a command line download tool. I suggest wget. If it is not already installed, you can get it on Ubuntu, for example, with:

sudo apt-get install wget

Other useful programs are curl and fetch (fetch is available by default on FreeBSD). To completely replace wget, you need to replace wget -O with curl -o in all installation instructions here and in each of the files src/bin/fetch_osc.sh, src/cgi-bin/ping, and src/cgi-bin/template inside the block fetch_file(). The same applies to fetch: in this case, replace wget -O with fetch -o.
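As an illustration, the substitution inside fetch_file() would look roughly as follows (the exact lines vary between the scripts; the URL and file name are placeholders):

wget -O "target_file" "https://example.org/source_file"
curl -o "target_file" "https://example.org/source_file"
fetch -o "target_file" "https://example.org/source_file"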

To compile the software, you need a C++ compiler and make. I suggest the GCC collection. If it is not already installed, you can get it on Ubuntu, for example, with:

sudo apt-get install g++ make

To compile the software, you also need the expat library. If it is not already installed, you can get it on Ubuntu, for example, with:

sudo apt-get install expat libexpat1-dev zlib1g-dev

You can also build expat from sources; this way you don't need root permissions just to install expat. Download the latest tarball from the project's page. Expat itself is installed by unpacking, then configure; make; make install. To use these libraries, insert CPPFLAGS="-I/path/to/expat/include" and LDFLAGS="-static -L/path/to/expat/lib/" into the make command:

make CPPFLAGS="-I/path/to/expat/include" LDFLAGS="-static -L/path/to/expat/lib/" install

where /path/to/expat must be replaced by the path that you have chosen in the configure step of expat. Note: if you need to supply more than one CPPFLAGS argument this way, use a single CPPFLAGS parameter with both arguments inside the quotation marks, separated by a space.
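For example, to combine the expat include path with the FreeBSD flag mentioned in the compile step below, both arguments go into one quoted value:

make CPPFLAGS="-I/path/to/expat/include -DNATIVE_LARGE_FILES" LDFLAGS="-static -L/path/to/expat/lib/" install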

Software Installation

You need to choose a directory where to put the executable files. You can move them to a different directory later. But the default choice of the installation program automake, /usr/bin, requires root permissions, although no root permissions are really necessary to run the program. I suggest subdirectories of the source code directory; this can be achieved with "`pwd`". To configure this output directory and detect necessary adaptations of your system, run:

./configure --prefix="`pwd`"

Generate the executables:

make

Systems other than Linux may require extra parameters here. For example, FreeBSD needs -DNATIVE_LARGE_FILES, because it doesn't have a separate open64 function:

make CPPFLAGS="-DNATIVE_LARGE_FILES"

Fast Startup

Since version 0.6.98, the database can be cloned from an existing instance rather than created from scratch. This takes only 4 to 8 hours, in comparison to 24 to 48 hours for an import from the planet file. Note that this feature is still rather experimental - please report any problems by e-mail to me (roland.olbricht at gmx.de). If you don't want the entire planet or prefer a manual planet import for some other reason, use the manual import instead.

Download a clone of the database at dev.overpass-api.de with the command:

bin/download_clone.sh --source=https://dev.overpass-api.de/api_drolbr/ --db-dir="db/" --meta=no

or

nohup bin/download_clone.sh --source=https://dev.overpass-api.de/api_drolbr/ --db-dir="db/" --meta=no &

If you want meta data, use --meta=yes instead of --meta=no. If you want museum data (since 2012), use --meta=attic. This downloads about 40 GB (70 GB with meta data, 170 GB with museum data) in several compressed files and uncompresses them into a ready-to-use database.

Now you can proceed with minute updates.

Startup

The standard use case is to set up the database with the whole planet data, including meta data. If you haven't downloaded an OSM XML planet file yet, you can fetch one for example with:

wget -O planet.osm.bz2 "https://ftp.heanet.ie/mirrors/openstreetmap.org/planet/planet-latest.osm.bz2"

This file has a size of about 60 GB. Thus, depending on your internet connection, the download may take between 4 hours (fastest possible) and 66 hours (with 2 MBit/s). If you are not working on your local machine, you may want the download to continue even if you log out. Use nohup for this:

nohup wget -O planet.osm.bz2 "https://ftp.heanet.ie/mirrors/openstreetmap.org/planet/planet-latest.osm.bz2" &

Once you have the file, you can start the import. The import again may take up to 48 hours:

bin/init_osm3s.sh planet.osm.bz2 "db/" "./" --meta

or

nohup bin/init_osm3s.sh planet.osm.bz2 "db/" "./" --meta &

You may need to adapt the parameters: the first parameter, planet.osm.bz2, is the OSM file to process; the second parameter, "db/", is the directory where the database should go; and the third parameter, "./", is the base directory of the executables, i.e. update_database must exist in the subdirectory bin of the location that the third parameter points to.

You can also use any other OSM file. If you want to save half of the hard disk space and reduce the startup and update time by up to two thirds, you can omit meta data by omitting the --meta parameter.

When this command is done, it writes Update complete. to the console (or to the file nohup.out if you have used nohup). At this point, the database can be used.
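If you have used nohup, you can watch the progress with, for example:

tail -f nohup.out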

Minute Updates

The following steps are only needed if you want minutely updates. In this case, run the following commands:

nohup bin/dispatcher --osm-base --meta --db-dir="db/" &
chmod 666 "db/osm3s_v0.7.55_osm_base"

(without --meta if you have not processed meta data)

The dispatcher has been successfully started if you find a line "Dispatcher just started." with the correct date (in UTC) in the file transactions.log in the database directory.
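A quick way to check this, assuming the database directory db/:

grep "Dispatcher just started" "db/transactions.log"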

nohup bin/fetch_osc.sh id "https://planet.openstreetmap.org/replication/minute/" "diffs/" &

This should start to fill the directory "diffs/" with subdirectories that have three digits as names and finally contain files ending in osc.gz and state.txt.
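You can check that diffs are arriving, for example with:

find "diffs/" -name "*.osc.gz" | tail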

nohup bin/apply_osc_to_db.sh "diffs/" id --meta=yes &

(with --meta=no instead if you have not processed meta data or --meta=attic if you want to work with museum data)

These commands don't make sense without nohup, because the programs become daemons and never terminate. Once again, you need to replace parameters: you always need to replace id by the replicate id to start from. If you have obtained your database by cloning, you find the replicate id in the file replicate_id in the database directory. If you have imported the database from an OSM file, search with your browser on https://planet.openstreetmap.org/replication/minute/ for the last replication diff that was created before the planet creation date.
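For example, with a cloned database in db/, the id can be substituted directly from that file (a shell sketch, assuming the cloned layout described above and meta data processing):

nohup bin/fetch_osc.sh $(cat "db/replicate_id") "https://planet.openstreetmap.org/replication/minute/" "diffs/" &
nohup bin/apply_osc_to_db.sh "diffs/" $(cat "db/replicate_id") --meta=yes &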

The other parameters only need to be adapted if you have chosen a different directory in a previous step: "db/" is the directory of the database, "https://planet.openstreetmap.org/replication/minute/" is the replicate diffs' remote source, and "diffs/" is the directory where the minute diffs are stored until they have been applied.

Congratulations! You now have a database mirror that can serve the entire world and is always only a few minutes behind the OSM main database. We can now start up the additional modules:

Attach to Apache

To make your instance publicly visible, you need to make it accessible through a web server. We show here how to do this with the Apache server. Overpass API also works with any other web server that offers CGI. For example, it runs on https://overpass.openstreetmap.ru/cgi/ with nginx.

You need to edit Apache's configuration file, and you need root permissions to do so.

Apache is configured with the file /etc/apache2/httpd.conf. My configuration file looks, in simplified form, as follows:

ServerName www.overpass-api.de

LogLevel info
DocumentRoot /path/to/osm-3s_v0.7.55/html/

ScriptAlias /api/ /path/to/osm-3s_v0.7.55/cgi-bin/
<Directory "/path/to/osm-3s_v0.7.55/cgi-bin/">
  AllowOverride None
  Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
  Order allow,deny
  Allow from all
</Directory>

The essential part is to replace all occurrences of the path /path/to/osm-3s_v0.7.55/ with the real paths. This configuration file tells Apache to serve the HTML files from the directory /path/to/osm-3s_v0.7.55/html/ and to call programs in /path/to/osm-3s_v0.7.55/cgi-bin/ via CGI. The ScriptAlias makes them visible externally as /api/ instead of /cgi-bin/. For the remaining options, please look into the Apache documentation.

You need to check whether the involved directories and their parent directories have sufficient permissions for all users, because otherwise Apache (with its proxy user www-data) cannot access them:

chmod 755 /path
chmod 755 /path/to
chmod 755 /path/to/osm-3s_v0.7.55
chmod 755 /path/to/osm-3s_v0.7.55/html
chmod 755 /path/to/osm-3s_v0.7.55/bin
chmod 755 /path/to/osm-3s_v0.7.55/cgi-bin
chmod 755 /path/to/osm-3s_v0.7.55/db

Some directories are added later for some of the optional modules.
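Before restarting, you can check the configuration syntax with:

sudo apache2ctl configtest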

You can now (re-)start Apache to let the updated configuration come into effect:

sudo apache2ctl graceful
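You can then verify the setup with a simple request, for example (this assumes the server runs on localhost and uses the standard interpreter CGI shipped in cgi-bin):

wget -O - "http://localhost/api/interpreter?data=node(1);out;"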

The XAPI Wrapper

The XAPI wrapper delivers the XAPI compatibility layer. No changes to the Apache configuration or to the database are necessary.
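For example, once Apache is set up as described above, a classic XAPI request can be sent to the wrapper (bounding box and tag are placeholders):

wget -O - "http://localhost/api/xapi?node[bbox=7.1,50.7,7.2,50.8][amenity=pub]"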

Area creation

To use areas with Overpass API, you essentially need another permanently running process that generates the current areas from the existing data in batch runs.

First, you need to copy the rules directory into a subdirectory of the database directory:

cp -pR "rules" "db/"

The next step is to start a second dispatcher that coordinates read and write operations for the area-related files in the database:

nohup bin/dispatcher --areas --db-dir="db/" &
chmod 666 "db/osm3s_v0.7.55_areas"

The dispatcher has been successfully started if you find a line "Dispatcher just started." with the correct date (in UTC) in the file transactions.log in the database directory.

The third step then is to start the rule batch processor as a daemon:

nohup bin/rules_loop.sh "db/" &

Now we don't want this process to impede the real business of the server. Therefore, I strongly suggest lowering the priority of this process. To do this, find the PIDs belonging to the processes rules_loop.sh and ./osm3s_query --progress --rules with:

ps -ef | grep rules

Then run the following commands for each of the two PIDs:

renice -n 19 -p PID
ionice -c 2 -n 7 -p PID

The second command is not available on FreeBSD. This is not a big problem, because this rescheduling just gives hints to the operating system.
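If pgrep is available on your system, looking up and reprioritizing can be combined into one step; a sketch for the batch script (the same pattern applies to the osm3s_query process):

renice -n 19 -p $(pgrep -f rules_loop.sh)
ionice -c 2 -n 7 -p $(pgrep -f rules_loop.sh)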

When the batch process has completed its first cycle, all areas become accessible via the database at once. This may take up to 24 hours.
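Once the first cycle is complete, a simple smoke test can be run from the base directory, for example (a minimal sketch; it assumes both dispatchers are running, and the area name is a placeholder):

echo 'area[name="Berlin"];out;' | bin/osm3s_query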

Managing custom output

To make the custom output feature operational, you only need to copy the default templates into the corresponding subdirectory of the database:

cp -pR "templates" "db/"

No runtime component or change in the Apache configuration is needed.