= Appendix C: Easy Installation of NetarchiveSuite =

 * Verify that you have all the needed software installed by installing the !QuickStart according to https://netarchive.dk/suite/Quick_Start_Manual_3.12, e.g. in /home/test/netarchive, and starting it.
 * Shut down the !QuickStart according to the !QuickStart Manual.
 * Download the following attached files to e.g. /home/test/netarchive:
  * [[attachment:RunNetarchiveSuite.sh]]
  * [[attachment:deploy_standalone_example.xml]]

The first file is a simple script that performs all the steps of a deployment. It takes a !NetarchiveSuite package ('.zip'), a configuration file (the second file), and a temporary installation directory as arguments, in that order. In the configuration file all the applications are placed on one machine (e.g. the current machine, ~+{{{localhost}}}+~), which gives the same kind of instance as the !QuickStart. If run directly, it is installed and run from the directory ~+{{{/home/test/USER}}}+~. Other deploy examples are listed below; they have to be modified to your environment.

For example:

{{{
cd /home/test/netarchive
bash RunNetarchiveSuite.sh NetarchiveSuite.zip deploy_standalone_example.xml USER/
# If you have not set up your ssh keys correctly, you may need to log in
# a few times before the installation finishes successfully.
}}}

The script creates a "USER" folder in e.g. /home/test, which among other things contains scripts for starting and stopping !NetarchiveSuite, and then starts the whole !NetarchiveSuite.

 * Set your browser to use the proxy on port 8070 according to the !QuickStart Manual.
 * Open the GUI URL, e.g. http://dia-test-int-01.kb.dk:8074/HarvestDefinition/
 * You can now create, run and browse harvests according to the !QuickStart Manual or the User Manual.

== Examples of deploy configuration files ==
Below are three examples of configuration files for deploy. The first two require adaptation to your own system before use.
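The ssh remark in the example above concerns passwordless login: the deploy script connects to each machine with ssh/scp, so your public key should be installed on every target machine first. A minimal sketch, assuming a standard OpenSSH client (the host and user in the comments are placeholders, not values from this guide):

{{{
# Minimal sketch: set up passwordless ssh for the deploy script.
# Paths are OpenSSH defaults; host and user names are placeholders.
mkdir -p "$HOME/.ssh"
KEY="$HOME/.ssh/id_rsa"
# Generate a key pair without a passphrase if none exists yet
if [ ! -f "$KEY" ]; then
    ssh-keygen -t rsa -N "" -f "$KEY" -q
fi
# Install the public key on each machine named in the deploy
# configuration, for example (placeholder host):
#   ssh-copy-id test@localhost
# Afterwards "ssh test@localhost" should log in without a password.
}}}

With the key in place, the repeated password prompts mentioned in the comment above should disappear.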
 . [[attachment:deploy_distributed_example.xml]] An instance with two replicas divided over two physical locations. Each physical location contains several machines: bitarchive machines, a harvester machine and a viewerproxy machine. Only one physical location has an administrator machine, which contains the GUI application, the bitarchive monitors and the arc repository.
----
 . [[attachment:deploy_distributed_example_single.xml]] An instance with only one replica and one physical location. It is very close to the first example, just with one replica removed.
----
 . [[attachment:deploy_distributed_example_database.xml]] An instance using the archive database for the !ArcRepository and the !DatabaseBasedActiveBitPreservation. It contains a checksum replica, and it does not use admin.data.
----
== A running HW/SW setup example from June 2009 for Netarkivet.dk ==
----
 . [[attachment:HW_SW_production_example.txt]]
----
== How to add one more harvester on the same machine and set all harvesters to HIGHPRIORITY selective harvesting ==
Using e.g. deploy_example.xml:
 * Duplicate the existing harvester definition in the deploy configuration file.
 * In the new duplicate harvester definition, change the following values to new unique values, e.g.: the application instance id ({{{high2}}}), the JMX port and RMI port ({{{8112}}} and {{{8212}}}), the Heritrix GUI port and JMX port ({{{8192}}} and {{{8193}}}), the JMX role and password ({{{controlRole}}} and {{{R_D}}}) and the harvester directory ({{{harvester_high_2}}}).
 * Set the harvest queue priority of all harvesters to {{{HIGHPRIORITY}}}.
== How to configure which Heritrix reports are uploaded in the metadata ARC file ==
Three settings properties control which Heritrix reports are added to the metadata ARC file:
 - ''settings.harvester.harvesting.metadata.heritrixFilePattern'' is a Java regular expression that selects which files in the crawl directory (not recursively) are included in the metadata ARC.
 - ''settings.harvester.harvesting.metadata.reportFilePattern'' is also a Java regular expression; it controls which subset of the files selected by heritrixFilePattern are considered report files.
All other files selected by heritrixFilePattern are considered setup files.
 - ''settings.harvester.harvesting.metadata.logFilePattern'' is a third Java regular expression that controls which files in the logs subdirectory of the crawl directory are added as log files to the metadata ARC.
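To preview which files a given pattern would pick up, you can test it against a listing of an old crawl directory. The sketch below uses grep's POSIX extended regexes, which behave like the Java regexes these settings expect for simple patterns such as this one; all file names and the pattern value are illustrative examples, not defaults of the settings above:

{{{
# Example crawl directory with a mix of report and setup files
crawldir=$(mktemp -d)
touch "$crawldir/crawl-report.txt" "$crawldir/seeds-report.txt" \
      "$crawldir/order.xml" "$crawldir/seeds.txt"

# Hypothetical reportFilePattern value: files ending in "-report.txt"
pattern='.*-report\.txt'

# Show which files the pattern classifies as report files
matches=""
for f in "$crawldir"/*; do
    base=$(basename "$f")
    if printf '%s\n' "$base" | grep -Eq "^${pattern}$"; then
        echo "report file: $base"
        matches="$matches$base "
    fi
done
rm -rf "$crawldir"
}}}

Here crawl-report.txt and seeds-report.txt match, while order.xml and seeds.txt would fall into the setup-file group.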