| 1 | = Eagle and Hawk Reprocessed data = |
| 2 | |
| 3 | So you have to reprocess data... Let's see if we can help you out with this. Good luck! |
| 4 | |
| 5 | |
== Accessing the Data ==
| 7 | |
| 8 | First of all, if the data is not in our system, download the data from CEDA: https://data.ceda.ac.uk/neodc/arsf/ |
| 9 | The password is here: https://rsg.pml.ac.uk/intranet/trac/wiki/Projects/ARSF-DAN/Passwords |
| 10 | |
Ideally there will be RAW data available, in which case you can process the dataset as any other. If not, download the delivered level1b files and hdf files. You will also need to download the log file and any documentation you can find.
| 12 | |
| 13 | |
== Create project directory and setup ==
Create a project directory in the appropriate location: </users/rsg/arsf/arsf_data/year/flight_data/country/project>
| 16 | |
Choose an appropriate project name <ProjCode-year-_jjjs_Site> and create the project directories as the "arsf" user by using
`build_structure.py -p .`
Then change to the 'airborne' user and create the missing directories. If the year is not found in the folder structure, you can run build_structure.py from another year and simply edit the directories manually or move them.
| 20 | |
Once the structure is created, copy the level1b files to processing/hyperspectral/flightlines/level1b/
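
If build_structure.py could not create the full tree, a minimal sketch along these lines can fill in the missing directories and copy the level1b files (the directory list is taken from the paths used later on this page; the download location is a hypothetical placeholder):
{{{
# Sketch: create the processing tree used on this page and copy the level1b files.
import glob
import os
import shutil

dirs = [
    "processing/hyperspectral/flightlines/level1b/hdf_files",
    "processing/hyperspectral/flightlines/navigation/interpolated/post_processed",
    "processing/hyperspectral/flightlines/georeferencing/igm",
    "processing/hyperspectral/flightlines/georeferencing/mapped",
    "processing/hyperspectral/dem",
    "processing/hyperspectral/logfiles",
    "processing/delivery",
]
for d in dirs:
    os.makedirs(d, exist_ok=True)

src = "/tmp/ceda_download"  # hypothetical: wherever you downloaded the CEDA files
for f in glob.glob(os.path.join(src, "*1b.bil*")):
    shutil.copy(f, "processing/hyperspectral/flightlines/level1b/")
}}}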
| 22 | |
If any project info you need is not found in the log, you might be able to get it from the hdf files.
First of all, activate the hdf python environment:
| 25 | `source ~utils/python_venvs/pyhdf/bin/activate` |
| 26 | |
Now you can extract extra info from the hdf files, for example:
`get_extra_hdf_params.py e153081b.hdf`
| 29 | |
You might need to supply the missing information to some of the scripts yourself. One example is:
| 31 | {{{ |
| 32 | --base="Almeria" --navsys="D.Davies" --weather="good" --projcode="WM06_13" --operator="S. J. Rob" --pi="R. Tee" --pilot="C. Joseph" |
| 33 | }}} |
| 34 | |
If the project is not in the database or on the processing status page, it is better to add it (just follow the usual steps for unpacking a new project). Otherwise, many of the commands below will need that extra metadata supplied manually or might simply fail.
| 36 | |
| 37 | |
== Extracting navigation and setting up ==
| 39 | |
Inspect the level1b hdr files. To run APL, the Eagle and Hawk files need the binning and "x start" in the hdr, as well as "Acquisition Date", "Starting time" and "Ending time". You can get some of that info by running:
| 41 | |
| 42 | `hdf_reader.py --file e153031b.hdf --item MIstime` |
where MIstime is the Starting Time, returned in `hhmmss` format. A list of keywords is available in the az guide found under az_docs. You will also need (if not present in the data) the keywords MIdate and MIetime. The binning and the x start are usually not saved in the hdf file, so you need to work them out manually by visually inspecting the hdf lines in tuiview.
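
To pull the date and time keywords for every line in one go, a minimal sketch like this may help (it assumes hdf_reader.py is on your PATH and prints the value to stdout, as in the example above):
{{{
# Sketch: dump the date/time keywords for every hdf file.
import glob
import subprocess

hdf_dir = "processing/hyperspectral/flightlines/level1b/hdf_files"
for hdf in sorted(glob.glob(hdf_dir + "/*.hdf")):
    for item in ("MIdate", "MIstime", "MIetime"):
        out = subprocess.run(["hdf_reader.py", "--file", hdf, "--item", item],
                             capture_output=True, text=True)
        print(hdf, item, out.stdout.strip())
}}}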
| 44 | |
It is possible that the level1b hdr files contain errors as well. Please double check for the most common ones:
- The sensor name and sensor id for Hawk show the details for Eagle. Correct manually to: sensor id = 300011, sensor type = SPECIM Hawk.
- `reflectance scale factor = 1000.000` should instead be `Radiance data units = nW/(cm)^2/(sr)/(nm)`
| 48 | |
A corrected file should look like this:
| 50 | {{{ |
| 51 | binning = {2, 2} |
| 52 | x start = 27 |
| 53 | acquisition date = DATE(dd-mm-yyyy): 02-06-2006 |
| 54 | GPS Start Time = UTC TIME: 10:58:19 |
| 55 | GPS Stop Time = UTC TIME: 11:03:49 |
| 56 | wavelength units = nm |
| 57 | Radiance data units = nW/(cm)^2/(sr)/(nm) |
| 58 | }}} |
| 59 | |
| 70 | |
In the next step, you need to **create the nav files** by extracting the navigation information from the hdf files. Run the following command for each line (remember to activate the pyhdf environment as above):
| 72 | {{{ |
hdf_to_bil.py processing/hyperspectral/flightlines/level1b/hdf_files/e153101b.hdf processing/hyperspectral/flightlines/navigation/interpolated/post_processed/e153101b_nav_post_processed.bil
| 74 | }}} |
| 75 | |
Please note this is the original navigation rather than genuinely post-processed navigation, but the scripts will be looking for the `1b_nav_post_processed` keyword in the filenames. Rename the files if necessary to match that naming format.
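
A loop along these lines saves running the command by hand for every flightline (a sketch, assuming the hdf files follow the e/h naming used above):
{{{
# Sketch: extract navigation from every hdf file into the expected
# *_nav_post_processed.bil naming.
import glob
import os
import subprocess

hdf_dir = "processing/hyperspectral/flightlines/level1b/hdf_files"
nav_dir = "processing/hyperspectral/flightlines/navigation/interpolated/post_processed"

for hdf in sorted(glob.glob(os.path.join(hdf_dir, "*1b.hdf"))):
    stem = os.path.splitext(os.path.basename(hdf))[0]  # e.g. e153101b
    out = os.path.join(nav_dir, stem + "_nav_post_processed.bil")
    subprocess.run(["hdf_to_bil.py", hdf, out], check=True)
}}}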
| 77 | |
Once you have extracted the navigation, you can automatically create a DEM by specifying the BIL navigation files directory and running:
| 79 | `create_apl_dem.py --aster -b ./processing/hyperspectral/flightlines/navigation/interpolated/post_processed/` |
| 80 | |
| 81 | |
You can then create the Specim config file, supplying the project-specific information:
| 83 | {{{ |
| 84 | generate_apl_config.py --base="Almeria" --navsys="D.Davies" --weather="good" --projcode="WM06_13" --operator="S. J. Rob" --pi="R. Tee" --pilot="C. Joseph" --hdfdir processing/hyperspectral/flightlines/level1b/hdf_files |
| 85 | }}} |
| 86 | |
**Please note you will not be able to process the data with the config file if there is no RAW data.** The script will print lots of errors, but it will create a config file that makes creating the xml files at the last stage much easier. The config file will have all the metadata for each line, including altitude (for the pixel size calculator), and should have the UTM zone for the projection. However, the config file will assume that all files are CASI; rename the entries and make sure they match the Eagle and Hawk flightlines. Double check and complete all the information (as for any other airborne request), including projection, DEM, pixel size, bands to map...
| 88 | |
| 89 | |
| 90 | |
| 91 | == Processing flightlines == |
As there is no raw data, you will need to run each apl command manually. Simply create a python script that goes over the files and runs each step: aplcorr, apltran, aplmap and aplxml. An example of the aplcorr step is:
| 93 | {{{ |
| 94 | aplcorr -vvfile ~arsf/calibration/2006/hawk/hawk_fov_fullccd_vectors.bil \ |
| 95 | -navfile processing/hyperspectral/flightlines/navigation/interpolated/post_processed/h153{:02}1b_nav_post_processed.bil \ |
| 96 | -igmfile processing/hyperspectral/flightlines/georeferencing/igm/h153{:02}1b_igm.bil \ |
| 97 | -dem processing/hyperspectral/dem/WM06_13-2006_153-ASTER.dem \ |
| 98 | -lev1file processing/hyperspectral/flightlines/level1b/h153{:02}1b.bil \ |
| 99 | >> processing/hyperspectral/logfiles/h-{:02}-log |
| 100 | }}} |
The output of each command is appended to the logfile "h-{:02}-log", which is later used to create the xml files. Each APL command must appear only once in the log file, so check the output and make sure the lines process correctly before looping over all lines. If something went wrong, you can simply delete the log file and create a new one. Otherwise, you will have to delete the extra commands manually.
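
Before looping over the remaining lines, a quick check like this sketch can confirm that each per-line log records the commands only once (it assumes each APL tool echoes its command name into the output it writes to the log):
{{{
# Sketch: warn if any per-line log mentions an APL command more than once.
import glob

for log in glob.glob("processing/hyperspectral/logfiles/h-*-log"):
    with open(log) as f:
        text = f.read()
    for cmd in ("aplcorr", "apltran", "aplmap"):
        n = text.count(cmd)
        if n > 1:
            print("{}: {} appears {} times - consider deleting and rerunning".format(log, cmd, n))
}}}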
| 102 | |
| 103 | An example for running apltran is: |
| 104 | {{{ |
| 105 | apltran -igm processing/hyperspectral/flightlines/georeferencing/igm/h153{:02}1b_igm.bil \ |
| 106 | -output processing/hyperspectral/flightlines/georeferencing/igm/h153{:02}1b_igm_utm.bil -outproj utm_wgs84N ZZ \ |
| 107 | >> processing/hyperspectral/logfiles/h-{:02}-log |
| 108 | }}} |
where ZZ is the UTM zone.
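
If the UTM zone is not recorded anywhere, it can be derived from the scene longitude with the standard formula, sketched here:
{{{
# Sketch: standard UTM zone number from a longitude in decimal degrees
# (e.g. the scene centre longitude taken from the navigation files).
def utm_zone(lon):
    return int((lon + 180) // 6) + 1

print(utm_zone(-2.1))  # 30, e.g. for Almeria
}}}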
| 110 | |
| 111 | And finally aplmap: |
| 112 | {{{ |
| 113 | aplmap -igm processing/hyperspectral/flightlines/georeferencing/igm/h153{:02}1b_igm_utm.bil -lev1 processing/hyperspectral/flightlines/level1b/h153{:02}1b.bil \ |
| 114 | -mapname processing/hyperspectral/flightlines/georeferencing/mapped/h153{:02}3b_mapped.bil -bandlist ALL -pixelsize 1 1 -buffersize 4096 -outputdatatype uint16 \ |
| 115 | >> processing/hyperspectral/logfiles/h-{:02}-log |
| 116 | }}} |
| 117 | |
If you create a script that runs those commands for each line, the processing should be mostly done. The same script can also create the xml files:
| 119 | {{{ |
cmd = 'aplxml.py --line_id={} \
--config_file "/data/nipigon1/scratch/arsf/2006/flight_data/spain/WM06_13-2006_153_Rodalquilar/processing/hyperspectral/2006153.cfg" \
--output=/users/rsg/arsf/arsf_data/2006/flight_data/spain/WM06_13-2006_153_Rodalquilar/processing/delivery/WM06_13-153-hyperspectral-20211005/flightlines/line_information/h153{:02}1b.xml \
--meta_type=i --sensor="hawk" \
--lev1_file=/data/nipigon1/scratch/arsf/2006/flight_data/spain/WM06_13-2006_153_Rodalquilar/processing/hyperspectral/flightlines/level1b/h153{:02}1b.bil \
--igm_header=/data/nipigon1/scratch/arsf/2006/flight_data/spain/WM06_13-2006_153_Rodalquilar/processing/hyperspectral/flightlines/georeferencing/igm/h153{:02}1b_igm.bil.hdr \
--logfile={} \
--raw_file=/data/nipigon1/scratch/arsf/2006/flight_data/spain/WM06_13-2006_153_Rodalquilar/processing/hyperspectral/flightlines/level1b/hdf_files/h153{:02}1b.hdf \
--reprocessing --flight_year yyyy --julian_day jjj --projobjective="-" --projsummary="-"'.format(line, line, line, line, logfile, line)
| 129 | }}} |
| 130 | |
| 131 | |
| 132 | Please note the `--reprocessing` flag is needed in this case. |
| 133 | |
With that, your general script structure in python should look like this:
| 135 | {{{ |
# A python script for running the processing steps over each flightline
import os

flightlines = []  # TODO: fill in the list of lines to process
for line in range(1, len(flightlines) + 1):
    # TODO: build cmd for each step using the examples above:
    # aplcorr, apltran, aplmap and aplxml
    cmd = "EDIT HERE".format(line)
    stream = os.popen(cmd)
    output = stream.read()
    print(output)
| 143 | }}} |
| 144 | |
| 145 | |
If everything went according to plan, you should have everything ready to create a delivery.
| 147 | |
| 148 | |
| 149 | == Delivery creation == |
Create the delivery structure first:
| 151 | {{{ |
| 152 | make_arsf_delivery.py --projectlocation <insert location and name> \ |
| 153 | --deliverytype hyperspectral --steps STRUCTURE |
| 154 | }}} |
| 155 | If happy, then run again with `--final` |
| 156 | |
Now check the other steps by running:
| 158 | {{{ |
| 159 | make_arsf_delivery.py --projectlocation <insert location and name> \ |
| 160 | --deliverytype hyperspectral --notsteps STRUCTURE |
| 161 | }}} |
| 162 | |
Inspect the output, especially the files that will be moved. In this case it might be easier to move the files yourself (or copy them if unsure) and skip this step before `--final`. For the Eagle and Hawk, most of the steps will run as expected except, most likely, the PROJXML step. This one might need extra information:
`--maxscanangle=0 --area Sitia --piemail unknown --piname "G Ferrier" --projsummary "-" --projobjective "-"`
| 165 | |
Or you can simply run aplxml to create this xml file; an example is:
| 167 | {{{ |
aplxml.py --meta_type p \
--project_dir /users/rsg/arsf/arsf_data/2005/flight_data/greece/130_MC04_15 \
--output /users/rsg/arsf/arsf_data/2005/flight_data/greece/130_MC04_15/processing/delivery/MC04_15-2005-130/project_information/MC04_15-2005_130-project.xml \
--config_file /users/rsg/arsf/arsf_data/2005/flight_data/greece/130_MC04_15/processing/hyperspectral/2005130_from_hdf.cfg \
--lev1_dir /users/rsg/arsf/arsf_data/2005/flight_data/greece/130_MC04_15/processing/delivery/MC04_15-2005-130/flightlines/level1b/ \
--igm_dir /users/rsg/arsf/arsf_data/2005/flight_data/greece/130_MC04_15/processing/hyperspectral/flightlines/georeferencing/igm/ \
--area Sitia --piemail unknown --piname "G Ferrier" --projsummary "-" --projobjective "-" --project_code "MC04_15"
| 169 | }}} |
| 170 | |
If everything went according to plan, the delivery should be nearly all done. You are likely to encounter new errors along the processing chain, so pay special attention to each step's error messages.
| 172 | |
| 173 | |
| 174 | === Delivery Readme file === |
Once the delivery creation is successful, the only thing left should be creating a Readme file. Simply run the command:
| 176 | `generate_readme_config.py -d <delivery directory> -r hyper -c <config_file>` |
This will create a config file for the delivery. If there are no mask files, the apl example commands are likely to fail, and you will need to enter the apl commands manually. Do not leave aplmask as an empty field, as the scripts will fail; enter a "void" placeholder string to remove from the tex file later.
| 178 | |
Once again, if there are no mask files, the readme creation (create_latex_hyperspectral_apl_readme.py) will fail as it tries to read the underflows and overflows. Skip this step by running the script with `--skip_outflows`. A simple example is:
| 180 | `create_latex_hyperspectral_apl_readme.py -o . -f hyp_genreadme-airborne.cfg --skip_outflows -s eagle` |
| 181 | |
This should create the Readme file. Edit the tex file and remove all references to the mask files and any other sections that do not apply. Complete the data quality section as required; you can use the reprocessed data quality report as a template, or another reprocessed dataset (such as 153 2006) as an example.
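
For the mechanical part of that cleanup, a sketch like this can drop the "void" placeholder lines before you edit the rest by hand (the tex filename is hypothetical; point it at the file the readme script produced):
{{{
# Sketch: remove any line containing the "void" aplmask placeholder from the tex file.
tex = "readme.tex"  # hypothetical name; use the tex file generated above
with open(tex) as f:
    lines = f.readlines()
with open(tex, "w") as f:
    f.writelines(line for line in lines if "void" not in line)
}}}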