Changes between Initial Version and Version 1 of Procedures/Eagle_and_Hawk-reprocessing


Timestamp:
Oct 13, 2021, 3:19:19 PM (3 years ago)
Author:
asm
= Eagle and Hawk Reprocessed data =

So you have to reprocess data... Let's see if we can help you out with this. Good luck!


== Accessing the Data ==

First of all, if the data is not in our system, download it from CEDA: https://data.ceda.ac.uk/neodc/arsf/
The password is here: https://rsg.pml.ac.uk/intranet/trac/wiki/Projects/ARSF-DAN/Passwords

Ideally there will be RAW data available and you can process the dataset like any other. If not, download the delivered level1b files and hdf files. You will also need to download the log file and any documentation you can find.

== Create project directory and setup ==
Create a project directory in an appropriate location: </users/rsg/arsf/arsf_data/year/flight_data/country/project>

Choose an appropriate project name <ProjCode-year-_jjjs_Site> and create the project directories as the "arsf" user by running
`build_structure.py -p . `
Then change to user 'airborne' and create the missing directories. If the year is not found in the folder structure, you can run build_structure from another year and simply edit the directories manually or move them.
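If you need to create the missing directories by hand, a small script can save some typing. This is only an illustrative sketch, not an official tool; the sub-directory names are the ones referred to later on this page, so adjust them to your project:

```python
import os

# Processing sub-directories used later in this procedure (hyperspectral only)
SUBDIRS = [
    "processing/hyperspectral/flightlines/level1b/hdf_files",
    "processing/hyperspectral/flightlines/navigation/interpolated/post_processed",
    "processing/hyperspectral/flightlines/georeferencing/igm",
    "processing/hyperspectral/flightlines/georeferencing/mapped",
    "processing/hyperspectral/logfiles",
    "processing/hyperspectral/dem",
]

def create_missing_dirs(project_root="."):
    """Create any missing processing sub-directories under the project root."""
    for sub in SUBDIRS:
        os.makedirs(os.path.join(project_root, sub), exist_ok=True)
```

Run it from (or point it at) the project directory while logged in as the appropriate user.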
     20
Once the structure is created, copy the level1b files to processing/hyperspectral/flightlines/level1b/

If any project info you need is not found in the log, you might be able to get it from the hdf files.
First of all, activate the hdf code for your environment:
`source ~utils/python_venvs/pyhdf/bin/activate`

Now you can get extra info from the hdf files, for example like this:
`get_extra_hdf_params.py e153081b.hdf `

You might need to fill in missing information for some of the scripts. One example is:
{{{
--base="Almeria" --navsys="D.Davies" --weather="good" --projcode="WM06_13" --operator="S. J. Rob" --pi="R. Tee" --pilot="C. Joseph"
}}}

If the project is not in the database or on the processing status page, it is better to add it (just follow the usual steps for unpacking a new project). Otherwise, many of the commands below will need that extra metadata or might simply fail.
     36
     37
== Extracting navigation and setting up ==

Inspect the level1b hdr files. To run APL, the Eagle and Hawk files need the binning and "x start" in the hdr, as well as "Acquisition Date", "Starting time" and "Ending time". You can get some of that info by running

`hdf_reader.py --file e153031b.hdf --item MIstime`
where MIstime is the Starting Time, returned in format `hhmmss`. A list of keywords is available in the az guide found under az_docs. You will also need (if not present in the data) the keywords --MIdate and --MIetime. You need to calculate the binning and the x start manually, by visual inspection of the hdf lines in tuiview, as that information is usually not saved in the hdf file.
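To gather the three keywords for every line, you can wrap `hdf_reader.py` in a short script. A minimal sketch, assuming `hdf_reader.py` is on your PATH and prints the value to stdout:

```python
import subprocess

# Keywords named in this procedure: start time, date, end time
KEYWORDS = ["MIstime", "MIdate", "MIetime"]

def read_hdf_items(hdf_file, keywords=KEYWORDS):
    """Return a dict of keyword -> value as printed by hdf_reader.py."""
    values = {}
    for item in keywords:
        result = subprocess.run(
            ["hdf_reader.py", "--file", hdf_file, "--item", item],
            capture_output=True, text=True, check=True)
        values[item] = result.stdout.strip()
    return values

# Usage (not run here): read_hdf_items("e153031b.hdf")
```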
     44
It is possible that the level1b hdr files contain errors as well. Please double check the most common errors:
 - Sensor name and sensor id for Hawk showing details for Eagle. Correct manually to: 300011, sensor type = SPECIM Hawk.
 - reflectance scale factor = 1000.000 -> should be -> Radiance data units = nW/(cm)^2/(sr)/(nm)
     48
An example of a corrected file should look like this:
{{{
binning = {2, 2}
x start = 27
acquisition date = DATE(dd-mm-yyyy): 02-06-2006
GPS Start Time = UTC TIME: 10:58:19
GPS Stop Time = UTC TIME: 11:03:49
wavelength units         = nm
Radiance data units = nW/(cm)^2/(sr)/(nm)
}}}
     59
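The two common hdr mistakes above can be flagged automatically. An illustrative sketch (not an official tool) that scans a level1b .hdr file and reports them:

```python
import os

def check_hdr(path):
    """Return a list of warnings for the common Eagle/Hawk hdr mistakes."""
    text = open(path).read().lower()
    warnings = []
    # Hawk files (h*.hdr) should not carry the Eagle sensor details.
    if os.path.basename(path).startswith("h") and "hawk" not in text:
        warnings.append("Hawk hdr does not mention SPECIM Hawk (sensor id 300011?)")
    # The radiance units line should replace any reflectance scale factor.
    if "reflectance scale factor" in text:
        warnings.append('found "reflectance scale factor"; expected '
                        '"Radiance data units = nW/(cm)^2/(sr)/(nm)"')
    return warnings
```

Run it over each hdr file and fix any reported line by hand.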
     70
In the next step, you need to **create the nav files** by extracting the navigation information from the hdf files. Run the following command for each line (remember to activate pyhdf as above):
{{{
hdf_to_bil.py processing/hyperspectral/flightlines/level1b/hdf_files/e153101b.hdf  processing/hyperspectral/flightlines/navigation/interpolated/post_processed/e153101b_nav_post_processed.bil
}}}
     75
Please note this is the original navigation rather than truly post-processed navigation, but the scripts will be looking for the `1b_nav_post_processed` keyword. Rename the files if necessary to match the naming format.
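A short helper can run `hdf_to_bil.py` over every hdf file and write each output straight to the `*_nav_post_processed.bil` name the later scripts expect. This is an illustrative sketch, not an official script:

```python
import os
import subprocess

def nav_output_name(hdf_name):
    """Map an hdf file name (e.g. e153101b.hdf) to the expected nav file name."""
    return hdf_name[:-len(".hdf")] + "_nav_post_processed.bil"

def extract_all_navigation(hdf_dir, nav_dir):
    """Run hdf_to_bil.py for each .hdf file, writing *_nav_post_processed.bil."""
    for name in sorted(os.listdir(hdf_dir)):
        if name.endswith(".hdf"):
            out = os.path.join(nav_dir, nav_output_name(name))
            subprocess.run(["hdf_to_bil.py", os.path.join(hdf_dir, name), out],
                           check=True)
```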
     77
If you have extracted the navigation, then you can create a DEM automatically by specifying the BIL navigation files directory and running:
`create_apl_dem.py --aster -b ./processing/hyperspectral/flightlines/navigation/interpolated/post_processed/`
     80
     81
And you can create the Specim config file, giving the specific information:
{{{
generate_apl_config.py --base="Almeria" --navsys="D.Davies" --weather="good" --projcode="WM06_13" --operator="S. J. Rob" --pi="R. Tee" --pilot="C. Joseph" --hdfdir processing/hyperspectral/flightlines/level1b/hdf_files
}}}
     86
**Please note you will not be able to process the data with the config file if there was no RAW data**. The script will print lots of errors, but it will create a config file that makes creating the xml files at the last stage much easier. The config file will have all the metadata for each line, including altitude (for the pixel size calculator), and should have the UTM zone for the projection. However, the config file will think that all files are CASI; rename them and make sure all files match the Eagle and Hawk flightlines. Double check and complete all information (as for any other airborne request) including projection, DEM, pixel size, bands to map...
     88
     89
     90
== Processing flightlines ==
As there is no raw data, you will need to run each apl command manually. Simply create a python script that goes over the files and runs each step: aplcorr, apltran, aplmap and aplxml. An example of the aplcorr call is:
{{{
    aplcorr -vvfile ~arsf/calibration/2006/hawk/hawk_fov_fullccd_vectors.bil \
    -navfile processing/hyperspectral/flightlines/navigation/interpolated/post_processed/h153{:02}1b_nav_post_processed.bil \
    -igmfile processing/hyperspectral/flightlines/georeferencing/igm/h153{:02}1b_igm.bil \
    -dem processing/hyperspectral/dem/WM06_13-2006_153-ASTER.dem \
    -lev1file processing/hyperspectral/flightlines/level1b/h153{:02}1b.bil \
    >> processing/hyperspectral/logfiles/h-{:02}-log
}}}
The output of the script is saved directly in the logfile "h-{:02}-log", which is what later allows the xml files to be created. Each APL command must appear only once in the log file, so check the output and make sure the lines can be processed before looping over all lines. If something went wrong, you can simply delete the log file and create a new one. Otherwise, you will have to delete the extra commands manually.
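A quick sanity check on a log file can catch repeated commands before you move on. This sketch simply counts command-name occurrences per line, assuming each APL invocation appears once in the log; it is illustrative only:

```python
def duplicate_apl_commands(log_path, commands=("aplcorr", "apltran", "aplmap")):
    """Return the APL command names that occur on more than one log line."""
    counts = {cmd: 0 for cmd in commands}
    with open(log_path) as log:
        for line in log:
            for cmd in commands:
                if cmd in line:
                    counts[cmd] += 1
    return [cmd for cmd, n in counts.items() if n > 1]
```

If it returns anything, delete the log and re-run that line, or edit the extra commands out by hand.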
     102
An example of running apltran is:
{{{
    apltran -igm processing/hyperspectral/flightlines/georeferencing/igm/h153{:02}1b_igm.bil \
    -output processing/hyperspectral/flightlines/georeferencing/igm/h153{:02}1b_igm_utm.bil -outproj utm_wgs84N ZZ \
    >> processing/hyperspectral/logfiles/h-{:02}-log
}}}
where ZZ is the UTM zone.
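If the UTM zone is not in the config file, the standard formula from the scene longitude is zone = floor((lon + 180) / 6) + 1. A minimal sketch (ignores the polar and Norway special cases):

```python
def utm_zone(longitude_deg):
    """Standard UTM zone number (1-60) for a longitude in degrees east."""
    return int((longitude_deg + 180.0) // 6.0) + 1
```

For example, a site near longitude -2.1 (Almeria) falls in zone 30.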
     110
And finally aplmap:
{{{
    aplmap -igm processing/hyperspectral/flightlines/georeferencing/igm/h153{:02}1b_igm_utm.bil -lev1 processing/hyperspectral/flightlines/level1b/h153{:02}1b.bil \
    -mapname processing/hyperspectral/flightlines/georeferencing/mapped/h153{:02}3b_mapped.bil -bandlist ALL -pixelsize 1 1 -buffersize 4096 -outputdatatype uint16 \
    >> processing/hyperspectral/logfiles/h-{:02}-log
}}}
     117
If you create a script to run that code for each line, then the processing should be mostly done, and the same script can also create the xml files:
{{{
    cmd = 'aplxml.py --line_id=-{} \
    --config_file "/data/nipigon1/scratch/arsf/2006/flight_data/spain/WM06_13-2006_153_Rodalquilar/processing/hyperspectral/2006153.cfg" \
    --output=/users/rsg/arsf/arsf_data/2006/flight_data/spain/WM06_13-2006_153_Rodalquilar/processing/delivery/WM06_13-153-hyperspectral-20211005/flightlines/line_information/h153{:02}1b.xml \
    --meta_type=i --sensor="hawk" \
    --lev1_file=/data/nipigon1/scratch/arsf/2006/flight_data/spain/WM06_13-2006_153_Rodalquilar/processing/hyperspectral/flightlines/level1b/h153{:02}1b.bil \
    --igm_header=/data/nipigon1/scratch/arsf/2006/flight_data/spain/WM06_13-2006_153_Rodalquilar/processing/hyperspectral/flightlines/georeferencing/igm/h153{:02}1b_igm.bil.hdr \
    --logfile={} \
    --raw_file=/data/nipigon1/scratch/arsf/2006/flight_data/spain/WM06_13-2006_153_Rodalquilar/processing/hyperspectral/flightlines/level1b/hdf_files/h153{:02}1b.hdf \
    --reprocessing --flight_year yyyy --julian_day jjj --projobjective="-" --projsummary="-"'.format(line, line, line, line, logfile, line)
}}}

Please note the `--reprocessing` flag is needed in this case.
     133
That way your general script structure in python should look like this:
{{{
import os

# A python script for running each APL step over the flightlines
for line in range(1, len(flightlines)):
    # TODO: build cmd for each step (aplcorr, apltran, aplmap and aplxml)
    # using the examples above
    cmd = "EDIT HERE".format(line)
    stream = os.popen(cmd)
    output = stream.read()
    print(output)
}}}
     144
     145
If everything went according to plan, then you should have everything ready for creating a delivery.
     147
     148
== Delivery creation ==
You should create the structure first:
{{{
make_arsf_delivery.py --projectlocation <insert location and name> \
                      --deliverytype hyperspectral --steps STRUCTURE
}}}
If happy, then run again with `--final`.
     156
Now you should check the other steps; run
{{{
make_arsf_delivery.py --projectlocation <insert location and name> \
                      --deliverytype hyperspectral --notsteps STRUCTURE
}}}
     162
Inspect the output, especially the files that will be moved. In this case, it might be easier to move the files yourself (or copy them if unsure) and skip this step before --final. For the Eagle and Hawk, most of the steps will run as expected, except most likely the PROJXML step. For this one, it might need extra information:
`--maxscanangle=0 --area Sitia --piemail unknown --piname "G Ferrier" --projsummary "-" --projobjective "-"`
     165
Or you can simply run aplxml to create this xml file; an example is:
{{{
aplxml.py --meta_type p --project_dir /users/rsg/arsf/arsf_data/2005/flight_data/greece/130_MC04_15 --output /users/rsg/arsf/arsf_data/2005/flight_data/greece/130_MC04_15/processing/delivery/MC04_15-2005-130/project_information/MC04_15-2005_130-project.xml --config_file /users/rsg/arsf/arsf_data/2005/flight_data/greece/130_MC04_15/processing/hyperspectral/2005130_from_hdf.cfg --lev1_dir /users/rsg/arsf/arsf_data/2005/flight_data/greece/130_MC04_15/processing/delivery/MC04_15-2005-130/flightlines/level1b/ --igm_dir /users/rsg/arsf/arsf_data/2005/flight_data/greece/130_MC04_15/processing/hyperspectral/flightlines/georeferencing/igm/ --area Sitia --piemail unknown --piname "G Ferrier" --projsummary "-" --projobjective "-" --project_code "MC04_15"
}}}
     170
If everything went according to plan, the delivery should be nearly all done. You are likely to encounter new errors along the processing chain, so pay special attention to the error messages at every step.
     172
     173
=== Delivery Readme file ===
Once the delivery creation is successful, the only thing left should be creating a Readme file. Simply run the command:
`generate_readme_config.py -d <delivery directory> -r hyper -c <config_file>`
This will create a config file for the delivery. If there are no mask files, the apl example commands are likely to fail, and you will need to enter the apl commands manually. Do not leave the aplmask field empty as the scripts will fail; enter a "void" string to remove later from the tex file.
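Removing those placeholders afterwards is easy to script. An illustrative sketch, assuming you used the literal string "void" as the placeholder:

```python
def strip_void_lines(tex_path, placeholder="void"):
    """Rewrite a .tex file with every line containing the placeholder removed."""
    with open(tex_path) as f:
        kept = [line for line in f if placeholder not in line]
    with open(tex_path, "w") as f:
        f.writelines(kept)
```

Pick a placeholder string that cannot occur in genuine readme text before relying on this.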
     178
Once again, if there are no mask files, the readme creation (create_latex_hyperspectral_apl_readme.py) will fail as it tries to read the underflows and overflows. You need to skip this step by running the script with --skip_outflows. A simple example is:
`create_latex_hyperspectral_apl_readme.py -o . -f hyp_genreadme-airborne.cfg --skip_outflows -s eagle`
     181
This should create the Readme file. Edit the tex file and remove all references to the mask files and any other section that does not apply. Complete the data quality section as required; you can use the reprocessed data quality report as a template, or another reprocessed dataset as an example (such as 153 2006).