Version 57 (modified by emca, 13 years ago)
Leica digital camera processing
The RCD produces raw files that must be processed to create TIFF files. The processing essentially applies a gain and offset scaling and (probably) corrects for lens distortion. See the RCD page for more details, including the filename convention.
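A minimal sketch of the gain/offset stage, assuming a simple linear model; the gain and offset values below are invented for illustration (the real values come from the camera calibration), and the lens-distortion correction is a separate geometric step not shown here:

```python
# Sketch of converting a raw RCD digital number (DN) to a 16-bit TIFF
# value via offset subtraction and gain scaling.  GAIN and OFFSET are
# hypothetical illustration values, not real calibration data.
GAIN = 1.2     # hypothetical per-channel gain
OFFSET = 340   # hypothetical dark offset (DN)

def scale_dn(raw_dn, gain=GAIN, offset=OFFSET):
    """Scale a raw DN and clamp the result to the 16-bit range."""
    value = int(round((raw_dn - offset) * gain))
    return max(0, min(65535, value))
```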
Raw to TIFF
The first stage in processing the photographic data is to convert the raw file format into a 16-bit TIFF format. The procedure for processing raw images to TIFF images can be found here.
Post-processing
Create temporary thumbnail images of the processed TIFF files using photo2thumb.py. Open the first image with eog and use either the spacebar to scroll through the images or View/Slideshow. Remove any images that are over/under exposed.
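As a rough sketch of that exposure check, the snippet below flags images whose mean digital number sits near the black or saturation end of the 16-bit range; the thresholds and the mean-DN input are assumptions for illustration, not part of photo2thumb.py (in practice the check is done by eye in eog):

```python
# Flag images whose mean DN suggests over/under exposure.  The
# thresholds are hypothetical illustration values.
UNDER_THRESHOLD = 2000    # hypothetical: below this looks underexposed
OVER_THRESHOLD = 63000    # hypothetical: above this looks overexposed

def flag_exposure(mean_dns):
    """Given {image_name: mean DN}, return names that look badly exposed."""
    return [name for name, dn in mean_dns.items()
            if dn < UNDER_THRESHOLD or dn > OVER_THRESHOLD]
```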
Check that there is a *_camera.sol file in the IPAS/proc directory. If there is not, you will need to create one. See details here: http://arsf-dan.nerc.ac.uk/trac/wiki/Procedures/ProcessingChainInstructions/NavigationProcessing
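That check can be scripted as below; `find_camera_sol` is a hypothetical helper for illustration, not an existing ARSF tool:

```python
# Look for *_camera.sol files in the IPAS proc directory.  The helper
# name and argument are illustrative.
import glob
import os

def find_camera_sol(proc_dir):
    """Return any *_camera.sol files found in proc_dir (sorted)."""
    return sorted(glob.glob(os.path.join(proc_dir, "*_camera.sol")))
```

An empty result means the SOL file needs creating, per the navigation processing instructions linked above.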
The next stages of the processing incorporate:
- creating a KML file, viewing it in Google Earth and removing any images that are not relevant to the study area
- updating the photograph event file with the post-processed positional data and the exterior camera angles
- tagging the images with positional and navigational data as well as project information
- renaming the images to conform to ARSF standards
- generating thumbnail images of the TIFFs.
For "perfect" projects this can all be done using one script, which will generate a delivery directory and populate it with correctly processed data. Unfortunately, many projects are not perfect and may require a more hands-on approach. A step-by-step procedure for post-processing data from problem projects can be found here.
To improve the chances of this single-script approach working, follow the stages below to set up the data and remove as many anomalies as possible.
- Create a KML file and view it in Google Earth. Check the rcd/logs directory for an *!ImageEvents1.csv file. There may be some empty files with this name, so be sure to choose the correct one. If there is more than one, you will need to specify which one to use.
- Run kmlise_project.py -d <main_project_dir> -e ***ImageEvents1.csv > kml_file.kml - this creates a KML file using the !ImageEvents1-processed.csv file. NOTE: this program currently falls over unless the eagle directory is clear of unrecognised files.
- Open the KML file in Google Earth
- Note which blocks of photographs do not overlap with any Eagle/Hawk data. These are usually a group of photographs at the start of the survey, taken while the exposure rates etc. were being set up on the camera. The log sheet often notes which images these are.
- Delete these tif files (not the raws) since they are not required.
- Delete the KML file created above as it is no longer needed. An updated one will be created by the delivery script.
- Open IPAS CO (on the Windows machine). This is used to create a new image event file with post-processed positional data and omega, phi, kappa values.
- Load in the *_camera.sol file, and from rcd/logs the *!ImageEvents1.csv and *!PhotoId1.csv files
- Camera Orientation Direction: -90
- Event Offset: 0.006
- Output file as *!ImageEvents1-processed.csv, file format ASCII Output
- Check the event log file for erroneous entries
- If there is no log file then this approach cannot be used. See the section below on tagging without log files.
- Anything with a -1 in the GPS time field cannot be tagged fully, only with project information. If possible, you may be able to use the SensorStats log file to estimate the GPS time of the erroneous events from the time differences it records. Note down the names of any images you do this to so that they can be listed in the Read Me. (This is probably no longer worth doing - it seems to be too imprecise.)
- Try running the script. Pipe the output to a text file in case you wish to review it afterwards.
Example command:
make_delivery_folder.sh -c -d ~arsf/arsf_data/2010/flight_data/uk/project_dir -y 2010 -j 297 -p EX01_01 -a "Example Site" -e ~arsf/arsf_data/2010/flight_data/uk/project_dir/leica/rcd/logs/ImageEvent1-processed.csv -s ~arsf/arsf_data/2010/flight_data/uk/project_dir/ipas/proc/2010_camera.sol -n "PI Name" | tee camera_delivery.log
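The empty-event-file problem noted in the KML step above can be screened for as below; `pick_image_events` is a hypothetical helper for illustration, not part of kmlise_project.py:

```python
# Return the non-empty *ImageEvents1.csv files in rcd/logs; empty
# files with this name sometimes exist and must not be used.
import glob
import os

def pick_image_events(logs_dir):
    """Sorted non-empty ImageEvents CSVs; more than one means choose manually."""
    candidates = glob.glob(os.path.join(logs_dir, "*ImageEvents1.csv"))
    return sorted(f for f in candidates if os.path.getsize(f) > 0)
```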
If the script fails then you will have to fix the problem and try again, or follow the individual stages listed here. Possible causes of failure, excluding those mentioned above, include:
- SOL file GPS times do not overlap with the photograph log file times. Either fix the SOL file (if possible) or use the log file to tag the images (and mention this in the Read Me).
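A minimal sketch of detecting that failure mode, assuming GPS times as plain seconds-of-week floats; the function name and values are illustrative:

```python
# Report photograph event times that fall outside the SOL file's GPS
# time range; such events cannot be tagged from the SOL data.
def events_outside_sol(sol_start, sol_end, event_times):
    """Return event times not covered by [sol_start, sol_end]."""
    return [t for t in event_times if t < sol_start or t > sol_end]
```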
Editing the Read Me file
An ASCII Read Me file is no longer automatically generated by the above script. Instead, a config file for the LaTeX PDF script is created, which will need some editing. Remember to add information on any photos that could not be tagged fully, and on any images that appear to have anomalies or over/under exposure. For more information on how to generate the PDF file, see here.
Subsequent processing
There are several other steps that could be undertaken:
- orthorectification (map the photos with respect to the ground/aircraft position)
- geocorrection (map the photos with respect to the ground + a DEM) - possibly only Bill's azgcorr mods could do this
- compositing orthorectified photos and seam-line adjustment
- compositing is easy, but there will be ugly problems where you get different views of an object with vertical structure
- to improve the look of this, you have to manually edit the positioning of the joins - this is a very manual process and we do not currently have software for it
Attachments (1)
- photo_processing_screenie.png (92.3 KB) - added by mggr 15 years ago. Example processing screen