== Tools in the Archive Module ==

=== dk.netarkivet.archive.tools.ReestablishAdminDatabase ===
This tool is only run when converting from the file-based administration of the ArcRepository to the database-based administration. It takes the admin.data file and puts its contents into the database.

==== Prerequisites ====
The external database must be running, the admin.data file must exist, and the tool must be run from the installation directory.

==== Usage ====
{{{
java -Ddk.netarkivet.settings.file=conf/settings_ArcRepositoryApplication.xml dk.netarkivet.archive.tools.ReestablishAdminDatabase [admin.data]
}}}
The optional argument [admin.data] is the path to the admin.data file. By default it is assumed to be called 'admin.data' and to be located in the directory where the tool is run. The argument is therefore only necessary if the admin.data file is in another directory or has another name (e.g. backups/admin.data or admin.data.backup).

=== dk.netarkivet.archive.tools.CreateIndex ===
This tool forces the IndexServer to create indices. It can be used to retrieve logs and CDX'es for previously completed harvest jobs before they are actually needed, which can reduce the time it takes to generate deduplication indices later.

==== Prerequisites ====
You need to have an IndexServerApplication online. If you use HTTP as the file transport method, you probably also need to override the setting settings.common.remoteFile.port in order to avoid conflicts (in the example below, the port number is set to 5000). Furthermore, all harvest jobs referred to in the CreateIndex commands must have metadata-1.arc files stored in the archive.
==== Usage ====
{{{
export INSTALLDIR=/fullpath/to/installdir
export CLASSPATH=$INSTALLDIR/lib/dk.netarkivet.archive.jar
cd $INSTALLDIR
java -cp lib/dk.netarkivet.archive.jar -Ddk.netarkivet.settings.file=conf/settings_ArcRepositoryApplication.xml -Dsettings.common.remoteFile.port=5000 -Dsettings.common.applicationInstanceId=INDEX dk.netarkivet.archive.tools.CreateIndex -t dedup -l 2
}}}
This requests a deduplication index based on the harvest job with id 2, and stores the index in $INSTALLDIR/DEDUP_CRAWL_LOG/2-cache.

=== dk.netarkivet.archive.tools.GetFile ===
With this tool you can retrieve a file from your archive.

==== Prerequisites ====
If you want to use another arcrepositoryclient than the default (dk.netarkivet.archive.arcrepository.distribute.JMSArcRepositoryClient), you need to override the setting {{{settings.common.arcrepositoryClient.class}}}. If you use the default, you need to set the environmentName correctly, so your !ArcrepositoryApplication receives your GetFile request, and you need to define your replicas as well as the replicaId of the replica from which you want to get the data. All this is most easily put into a local settings.xml:
{{{
<settings>
  <common>
    <environmentName>QUICKSTART</environmentName>
    <replicas>
      <replica>
        <replicaId>SH</replicaId>
        <replicaType>bitarchive</replicaType>
        <replicaName>SHB</replicaName>
      </replica>
    </replicas>
    <useReplicaId>SH</useReplicaId>
  </common>
</settings>
}}}
In the settings.xml above, the environment name has been set to QUICKSTART, there is a single bitarchive replica with replicaId SH and name SHB, and the id of the replica from which you want to get the data (useReplicaId) is "SH".

==== Usage ====
{{{
export INSTALLDIR=/fullpath/to/installdir
export CLASSPATH=$INSTALLDIR/lib/dk.netarkivet.archive.jar
export SETTINGSFILE=/home/user/conf/settings_ArcRepositoryApplication.xml
java -Ddk.netarkivet.settings.file=$SETTINGSFILE -Dsettings.common.applicationInstanceId=GETFILE \
dk.netarkivet.archive.tools.GetFile 3-metadata-1.arc
}}}
If the file 3-metadata-1.arc exists in your SH replica, it is downloaded from the archive and written to the current working directory. If not, you will wait for a long time, until the arcrepository client times out.
The tool has an optional second argument, which is a destination file:
{{{
export INSTALLDIR=/fullpath/to/installdir
export CLASSPATH=$INSTALLDIR/lib/dk.netarkivet.archive.jar
export SETTINGSFILE=/home/user/conf/settings_ArcRepositoryApplication.xml
java -Ddk.netarkivet.settings.file=$SETTINGSFILE -Dsettings.common.applicationInstanceId=GETFILE \
dk.netarkivet.archive.tools.GetFile 3-metadata-1.arc destination-file.arc
}}}

=== dk.netarkivet.archive.tools.Upload ===
The tool dk.netarkivet.archive.tools.Upload allows you to upload ARC files to a repository of your choice. The type of arcrepository you are uploading your files to is defined by the setting {{{settings.common.arcrepositoryClient.class}}}; the default is dk.netarkivet.archive.arcrepository.distribute.JMSArcRepositoryClient, a client that uses JMS messages to communicate with the repository.

==== Prerequisites ====
If you use the client dk.netarkivet.archive.arcrepository.distribute.JMSArcRepositoryClient, you need to ensure that you send upload requests to the correct JMS queue, and that you receive the responses from the client. This is ensured by setting {{{settings.common.environmentName}}} to the proper value (e.g. PROD or DEV). The same holds for the setting {{{settings.common.applicationName}}} (e.g. Upload) and, finally, {{{settings.common.applicationInstanceId}}} (e.g. ONE or TWO). If you intend to override any of the settings mentioned above, you can either do the overrides on the command line or write the overrides to a settings file.

==== Using the tool ====
This tool will upload a number of local files to all replicas in the archive. An example of an execution command is:
{{{
java -Ddk.netarkivet.settings.file=/home/user/conf/settings_ArcRepositoryApplication.xml \
-cp lib/dk.netarkivet.archive.jar -Dsettings.common.applicationInstanceId=UPLOAD \
dk.netarkivet.archive.tools.Upload \
file1.arc [file2.arc ...]
}}}
where file1.arc [file2.arc ...]
are the files to be uploaded. The behaviour of the default client (JMSArcRepositoryClient) is furthermore that a file which is uploaded successfully is deleted locally. This means that any files left after execution are not known to be stored safely. If several independent uploads are performed at the same time, they should use different values for 'settings.common.applicationInstanceId' (e.g. UPLOAD1, UPLOAD2, ...).

=== dk.netarkivet.archive.tools.GetRecord ===
This tool takes a CDX-based Lucene index and a URI, retrieves the corresponding ARC record from the archive, and dumps it to stdout.

==== Prerequisites ====
The same as for GetFile.

==== Usage ====
{{{
export INSTALLDIR=/fullpath/to/installdir
export CLASSPATH=$INSTALLDIR/lib/dk.netarkivet.archive.jar
export SETTINGSFILE=/home/user/conf/settings_ArcRepositoryApplication.xml
export LUCENE_INDEX=/tmp/cache/DEDUP_CRAWL_LOG/1-cache
export URI=http://www.netarkivet.dk
java -Ddk.netarkivet.settings.file=$SETTINGSFILE -Dsettings.common.applicationInstanceId=RECORD \
dk.netarkivet.archive.tools.GetRecord $LUCENE_INDEX $URI
}}}
If the URI is not in the given index, an exception is sent to stdout with the message: ''Resource missing in index or repository for URI''

TODO: Mention how to make a Lucene index for your stored arcfiles.

=== dk.netarkivet.archive.tools.RunBatch ===
The bitarchives are designed to receive batch programs to run on all the arc-files stored in the bitarchive. This is true no matter whether the bitarchive is installed as a local arcrepository or as a distributed repository with several bitarchives. Batch programs are also used internally by the !NetarchiveSuite software to do specific tasks like getting CDX'es for a specific job, checksums of arc-files stored in the bitarchive, or lists of arc-files from the bitarchive. The !RunBatch program is used to send your own batchjobs to the bitarchives.
Note that a batchjob will only be sent to one bitarchive replica! It is not possible to send batchjobs to checksum replicas; only bitarchive replicas can handle batchjobs.

==== Prerequisites for running a batch job ====
A number of prerequisites must be taken care of before a batch job can be executed. These are:

 * ''Settings file:''
  . must be present and must include declarations of at least the following settings:
   * Replica settings to identify the replica you want to communicate with:
    * ~+{{{settings.common.replicas}}}+~ in order for the batch program to identify and send messages to the bitarchive.
    * ~+{{{settings.common.useReplicaId}}}+~ in order to determine the default bitarchive replica to use.
   * Channel settings to be able to construct channel names for communicating with the running system:
    * ~+{{{settings.common.environmentName}}}+~ (typically PROD)
    * ~+{{{settings.common.applicationName}}}+~ (!RunBatchApplication, but currently set automatically)
   * Other settings related to communication where the running system's settings differ from the defaults.
 * ''Batch program:''
  . The batch program must be designed as a Java class that extends ARCBatchJob or !FileBatchJob, depending on whether you want to make a batch program over arc records or over files.
 * ''Call location:''
  . The !RunBatch program can be started from any of the machines in the distributed system where the system runs.
 * ''Disk space requirement on bitarchive:''
  . The disk space needed will depend on the batch program concerned. As an example, the !ChecksumJob produces about 100 bytes per arc-file, whereas a batch program writing the full contents of arc-files would require as much space as the archive itself.
 * ''Class path:''
  . Running !RunBatch requires ~+{{{lib/dk.netarkivet.archive.jar}}}+~ in the class path.
 * ''Memory space on bitarchive:''
  . The memory needed will depend on the batch program. If the batch program is packaged with a lot of jar files, these files will need to be kept in memory while the batch program is running, and on top of that come the memory requirements of the batch job itself.
 * ''Timeout on bitarchive monitor:''
  . To set a specific timeout for a concrete batchjob, you need to override 'protected long batchJobTimeout = -1;' in FileBatchJob.java. Otherwise the default timeout is 14 days.

==== Execution and Arguments ====
The execution of a batch program is done by calling the ~+{{{dk.netarkivet.archive.tools.RunBatch}}}+~ program with the following arguments:

If the batch program is given in a single class file, this must be specified in the parameter:
 * ~+{{{-C}}}+~ is a file containing a !FileBatchJob/ARCBatchJob implementation.

If the batch program is given in one or more jar files, this must be specified in the parameters:
 * ~+{{{-N}}}+~ is the name of the primary class to be loaded and executed as a !FileBatchJob/ARCBatchJob implementation.
 * ~+{{{-J}}}+~ is one or more files containing all the classes needed by the primary class. The files must be comma-separated.

To specify which files the batch program must be executed on, the following parameters may optionally be set:
 * ~+{{{-B}}}+~ is the name of the bitarchive replica on which the batchjob must be executed. The default is the name of the bitarchive replica identified by the setting ~+{{{settings.common.useReplicaId}}}+~. Note that it is the replica name and not the replica id which is referred to here. Also, it cannot be the name of a checksum replica, since batchjobs can only be executed on bitarchive replicas.
 * ~+{{{-R}}}+~ is a regular expression that will be matched against file names in the archive. The default is ~+{{{.*}}}+~, which means the batchjob will be executed on all files in the bitarchive replica.
To specify output files from the batch program, the following parameters may optionally be set:
 * ~+{{{-O}}}+~ is a file where the output from the batch job will be written. By default, the output goes to ~+{{{stdout}}}+~, where it will be mixed with other output.
 * ~+{{{-E}}}+~ is a file where the errors from the batch job will be written. By default, errors go to stderr.

An example of an execution command is:
{{{
java -cp lib/dk.netarkivet.archive.jar \
-Ddk.netarkivet.settings.file=/home/user/conf/settings_ArcRepositoryApplication.xml \
-Dsettings.common.applicationInstanceId=BATCH \
dk.netarkivet.archive.tools.RunBatch -C FindMime.class -B ReplicaOne -R '10-.*\.arc' -O resfile
}}}
which takes ~+{{{lib/dk.netarkivet.archive.jar}}}+~ into the class path and executes the general !NetarchiveSuite program ~+{{{dk.netarkivet.archive.tools.RunBatch}}}+~ based on settings from the file ~+{{{/home/user/conf/settings_ArcRepositoryApplication.xml}}}+~. This will run the batch program ~+{{{FindMime.class}}}+~ on the bitarchive replica named ~+{{{ReplicaOne}}}+~, but only on files with names matching the regular expression ~+{{{10-.*\.arc}}}+~. The results written by the batch program are concatenated and placed in the output file named ~+{{{resfile}}}+~.

It is always a good idea to use a unique value for 'settings.common.applicationInstanceId' for each batchjob (especially when running multiple batchjobs at once), since other applications could be listening on the same JMS channels and thus receive the reply message. This has previously caused confusion: the application that received the batchjob reply did not expect it and ignored it, so the tool never received the reply and thus never finished.

==== Example of packing and executing a batch job ====
To package the files, do the following:

~+{{{jar -cvf batchfile.jar path/batchProgram.class}}}+~

where ~+{{{path}}}+~ is the path to the directory where the batch class files are placed (under the ~+{{{bin/}}}+~ directory in the eclipse project), and ~+{{{batchProgram.class}}}+~ is the compiled file for your batch program.
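For orientation, a file-level batch job source might look roughly like the sketch below. Note that this is only an illustration: the real base class is the NetarchiveSuite !FileBatchJob, whose exact API may differ, so a small stand-in base class with the same overall shape (an assumption) is included here to make the example self-contained and compilable.

```java
import java.io.ByteArrayOutputStream;
import java.io.File;
import java.io.IOException;
import java.io.OutputStream;
import java.nio.file.Files;

// Stand-in for the NetarchiveSuite FileBatchJob base class, included only so
// this sketch compiles on its own; the real class's API may differ.
abstract class FileBatchJobSketch {
    // The manual mentions overriding this field to set a per-job timeout.
    protected long batchJobTimeout = -1;

    public abstract void initialize(OutputStream os);
    public abstract boolean processFile(File file, OutputStream os);
    public abstract void finish(OutputStream os);
}

/** A batch job that writes "<filename> <size-in-bytes>" for every file. */
public class SizeBatchJob extends FileBatchJobSketch {
    @Override
    public void initialize(OutputStream os) {
        // Called once before any file is processed; nothing to set up here.
    }

    @Override
    public boolean processFile(File file, OutputStream os) {
        try {
            os.write((file.getName() + " " + file.length() + "\n").getBytes("UTF-8"));
            return true;  // true = this file was processed successfully
        } catch (IOException e) {
            return false; // false = this file failed; the job continues
        }
    }

    @Override
    public void finish(OutputStream os) {
        // Called once after the last file; nothing to clean up here.
    }

    // Local demo driver: run the job on one temporary five-byte file.
    public static void main(String[] args) throws IOException {
        File tmp = File.createTempFile("demo-", ".arc");
        Files.write(tmp.toPath(), "hello".getBytes("UTF-8"));
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        SizeBatchJob job = new SizeBatchJob();
        job.initialize(out);
        boolean ok = job.processFile(tmp, out);
        job.finish(out);
        tmp.delete();
        // "hello" is five bytes, so the job's output line ends with " 5".
        System.out.println(ok && out.toString("UTF-8").trim().endsWith(" 5"));
    }
}
```

In a real deployment the class would be compiled against lib/dk.netarkivet.common.jar, packaged as shown above, and submitted with !RunBatch; the output stream passed to the job is what ends up in the file given with -O.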
The call to run this batch job is then:
{{{
java -cp lib/dk.netarkivet.archive.jar \
-Ddk.netarkivet.settings.file=conf/settings_ArcRepositoryApplication.xml \
dk.netarkivet.archive.tools.RunBatch -N path.batchProgram -J batchfile.jar
}}}
where ~+{{{path}}}+~ in the ~+{{{-N}}}+~ argument has all ~+{{{'/'}}}+~ changed to ~+{{{'.'}}}+~.

E.g. to run the batch job from the file ~+{{{myBatchJobs/arc/MyArcBatchJob.java}}}+~, which extends the ARCBatchJob class (~+{{{dk/netarkivet/common/utils/arc/ARCBatchJob}}}+~), do the following:

 * ~+{{{cd bin/}}}+~ - Place yourself in the bin/ folder under your project.
 * ~+{{{jar -cvf batch.jar myBatchJobs/arc/*}}}+~ - Package the compiled Java binaries into a .jar file.
 * ~+{{{mv batch.jar ~/NetarchiveSuite/.}}}+~ - Move the packaged batch job to your NetarchiveSuite directory.
 * ~+{{{cd ~/NetarchiveSuite/}}}+~ - Go to your NetarchiveSuite directory.
 * Run the following command to execute the batch job:

{{{
java -cp lib/dk.netarkivet.archive.jar:lib/dk.netarkivet.common.jar \
-Ddk.netarkivet.settings.file=conf/settings_ArcRepositoryApplication.xml \
dk.netarkivet.archive.tools.RunBatch -N myBatchJobs.arc.MyArcBatchJob -J batch.jar
}}}

The ~+{{{lib/dk.netarkivet.common.jar}}}+~ library needs to be included in the class path, since the batch job (~+{{{myBatchJobs/arc/MyArcBatchJob}}}+~) inherits from a class within this library (~+{{{dk/netarkivet/common/utils/arc/ARCBatchJob}}}+~).

==== Security ====
If the security properties for the bitarchive (independent of this execution) are set as described in the [[Configuration Manual 3.14#ConfigureSecurity|Configuration Manual]], the batch program will not be allowed to:
 * write files to the bitarchive
 * change files in the bitarchive
 * delete files in the bitarchive

==== Outstanding Issues ====
As described in [[AssignmentGroupB4|archive assignment B.2.4]], there are plans to improve the logging of internal logs and exceptions from batch jobs. The only way to code internal logging today is to write to the output of the batch program.

If a batch job runs at a time when not all bitarchives (i.e. !BitarchiveApplications) are online, this will not be discovered by the !NetarchiveSuite software.
As described in [[AssignmentGroupB2|archive assignment B.2.2]], there are plans to change this at a later stage.