LiDAR data delivery

Once the data has been processed, it needs to be put into a delivery directory. This is the final file structure in which it will be sent to the customer.

What should be included

  1. LAS point cloud data
  2. ASCII version of the LiDAR point cloud data
  3. PDF version of the flight logsheet
  4. Readme file describing the data set, plus a copyright notice
  5. Screenshot of a mosaic of all lines (full resolution) and a zoomed image with vector overlay
  6. Data quality report and further processing scripts
  7. DEM of the LiDAR data in British National Grid / UTM coordinates, and another version patched with ASTER data
  8. Screenshot of the DEM
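
A minimal sketch of how these items are typically laid out, assuming the directory names referenced in the steps below (flightlines/las1.2, flightlines/ascii, docs and the screenshot directory); the placement of the other items is illustrative only:

<delivery_directory>
├── flightlines
│   ├── las1.2        LAS point cloud data (las1.0 for older point formats)
│   └── ascii         ASCII version of the point cloud
├── docs              data quality report
├── screenshots       intensity mosaic, vector overlay and DEM images
├── dem               DEM products (illustrative name)
└── readme            readme and copyright notice (illustrative placement)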

For full waveform deliveries the following should also be included:

  1. Discrete LAS files
  2. Full waveform LAS files
  3. Navigation data (.sol file and .trj file)
  4. ASCII full waveform extractions - if requested by the PI

Procedure for creating a LIDAR data set delivery

Semi-scripted method

  1. See the guide for the delivery library
  2. If you did not create the DEM and screenshots using the above script, create them manually using createimage.py, las_to_dsm.py and create_dem_from_lidar.py
  3. Generate the readme file.
    1. Create a config file for the readme using the generate_readme_config.py script, with the -d option to specify the delivery directory and the -r option to specify the type of config file to generate.
    2. Edit the config file and check all the items are filled in:
      • Any remarks about the data should be entered as a sentence in the "data_quality_remarks" section.
      • If vectors have been used then the accuracy should be entered in "vectors" (e.g. '5-10' if they are accurate to within 5m to 10m)
      • line_numbering should contain a space-separated list of line names linking the logsheet to the processed files.
      • las_files should contain the path to the processed LAS files in order to extract the per-flightline statistics
      • elevation_difference should contain a list of the elevation differences between overlapping flightlines. Enter the line numbers and the difference in cm, followed by a semicolon, e.g. 001 002 5; 002 003 4.5; etc. From 2017 onwards these numbers should be generated using `calculate_lidar_elevation_diffs.py`
      • All "compulsory" items should contain data
    3. Create a TeX file using the script create_latex_lidar_readme.py -f <config filename>. Use the -w option to create a full waveform readme.
      1. This file can be reviewed and edited in any text editor if needed. If you edit it, remember to spell check. Note that this script may fail with the error [Errno 21] Is a directory: '/tmp/'. If this occurs, fill in the intensity and DEM images in the "Optional" section of the config file generated as a result of the above steps.
    4. Create a PDF file by running pdflatex <tex_filename> twice (see the example command sequence after this list).
      1. Review the readme and check carefully that it looks OK with all relevant information present
      2. Copy it to the delivery directory and remove any temporary files. It is recommended to keep the TeX file until after delivery checking in case any edits are required
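
As an illustrative sketch of this readme sequence (values in angle brackets are placeholders; only the -d, -r, -f and -w options are documented above):

generate_readme_config.py -d <delivery_directory> -r <config_type>
# edit the generated config file and check the items listed above
create_latex_lidar_readme.py -f <config_filename>   # add -w for a full waveform readme
pdflatex <tex_filename>
pdflatex <tex_filename>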

Manual Method

  1. Copy the template directory over to the project directory. The template directory is at ~arsf/arsf_data/2011/delivery/lidar/template
  2. If you did not create a CSV file mapping the original names of the LAS files to the new names using the above script, you can also create it before renaming the files using lidar_naming_csv.py
  3. Move the processed data to the delivery directory
    • Move the LAS binary files into delivery/flightlines/las1.2
    • REMEMBER THAT THESE LAS FILES SHOULD HAVE BEEN QC'ED AND CLASSIFIED FOR NOISY POINTS
    • Rename the files using the convention "LDR-PPPP_PP-yyyydddfnn.LAS" (details in the readme; you should have created a CSV file with the new names in previous steps, so rename accordingly).
    • Run las2txt.sh <delivery/flightlines/las1.0> <delivery/flightlines/ascii>. Note that it is important to ensure the correct options are used with this, otherwise it will output the wrong format.
    • OR run las2txt --parse txyzicrna <lidarfilename> <outputfilename> for each file, outputting to the ascii_laser directory (see the example loop after these steps).
  4. You need to create a DEM from the lidar data to include with the delivery. For more details see the page on 'Creating a DEM from point clouds'.
  5. Include a PDF version of the flight logsheet with the delivery
  6. Make sure the correct, up-to-date data quality report (PDF version) is included in the docs directory
  7. Create full-resolution JPEGs of a mosaic of all the lidar lines by intensity, a separate one of the intensity with vectors overlaid (if vectors are available) and one of the DEM, and put them in the screenshot directory (with createimage.py)
  8. Generate the readme as per the semi-scripted method above

Note: Be sure that all the files output in the above steps conform to the file name formats specified here
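
A minimal sketch of the per-file las2txt alternative, assuming the input directory is flightlines/las1.2 and the output directory is the ascii / ascii_laser directory referenced above (adjust both to your delivery layout):

for f in delivery/flightlines/las1.2/*.LAS; do
    # parse string txyzicrna as given above
    las2txt --parse txyzicrna "$f" delivery/flightlines/ascii/$(basename "$f" .LAS).txt
done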

Additional Steps for Full Waveform Deliveries

Create the following folders in the delivery directory (a sketch of the commands is given after this list):

  • discrete_laser - containing two folders: ascii_files and a folder for the discrete LAS files (las1.2 or las1.0)
    • discrete_laser/ascii_files - move the ascii_laser folder to discrete_laser/ascii_files
    • discrete_laser/(las1.2|las1.0) - move the discrete LAS files to here - naming convention LDR-PPPP_PP-yyyydddfnn.LAS
  • fw_laser - move the full waveform LAS files to here - naming convention LDR-FW-PPPP_PP-yyyydddfnn.LAS
  • navigation - copy the .sol file and .trj file to here - naming convention PPPP_PP-yyyy-dddf.*
  • fw_extractions - this should contain the following information (this step is only required if an area is specified by the PI):
    • A folder for each requested area containing the relevant ASCII extractions
    • A text file describing the information given in the ASCII files. A template is at ~arsf/arsf_data/2010/delivery/lidar/template/fw_extractions/extractions.txt. Use summarise_area.py to extract the extents for each area to enter into the readme. Naming convention PPPP_PP-yyyy-dddf_extractions.txt
    • A jpg showing the location of the areas on an intensity image. Naming convention PPPP_PP-yyyy-dddf_extractions.jpg
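
A hedged sketch of creating this structure (paths in angle brackets are placeholders for wherever the files currently sit in the processing directory):

cd <delivery_directory>
mkdir -p discrete_laser fw_laser navigation fw_extractions
mv <ascii_laser_dir> discrete_laser/ascii_files      # ASCII files from the discrete data
mv <discrete_las_dir> discrete_laser/las1.2          # or las1.0, to match the point format
mv <fw_las_dir> fw_laser                             # full waveform LAS files
cp <sol_file> <trj_file> navigation/                 # then rename to PPPP_PP-yyyy-dddf.*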

The readme will need to be edited to include the above information. A template of the full waveform readme can be found at ~arsf/arsf_data/2010/delivery/lidar/template/fw_readme.txt

Semi-scripted full waveform delivery

The delivery library can be used to perform a full waveform delivery. This is still under testing but should work. To get the script to perform a full waveform delivery, include the --lidarfw argument to the make_arsf_delivery.py script. The example below will create the full waveform directory structure.

make_arsf_delivery.py --lidarfw --solfile posatt/ipas_honeywell/proc/20140621_082249.sol --lidardeminfo resolution=2 inprojection=UKBNG --projectlocation . --deliverytype lidar --steps STRUCTURE --final

When the script runs in full waveform mode, the discrete classified lidar data needs to be in the las-classified folder, the classified full waveform data needs to be in las-fw-classified, and the trj files should be placed in trj; all these folders should be under PROJ_DIR/processing/als50 (see below for a clipping of the tree command output). Also ensure the logsheet PDF is in the admin folder, as the script will complain otherwise. It is best not to run the whole delivery at once: running the steps up to the rename stage first and then the rest has been found to work best. A few things still need to be done manually: the intensity image is named mosaic_image.jpg rather than PROJ_CODE-year_day-intensity.jpg, and the navigation SUP file is not copied across yet. The script that creates the screenshot images also has a bug that gives them a .txt.jpg extension; running rename.sh -f .txt.jpg .jpg fixes that.

processing
├── als50
│   ├── 2014172.cfg
│   ├── 2014172.reg
│   ├── las
│   ├── las-classified
│   ├── las-fw
│   ├── las-fw-classified
│   ├── logfiles
│   └── trj
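
A minimal sketch of the remaining manual fixes described above (the screenshots directory name and the destination for the SUP file are assumptions; PROJ_CODE, year, day and the SUP file source path are placeholders):

cd <delivery_directory>
mv screenshots/mosaic_image.jpg screenshots/<PROJ_CODE>-<year>_<day>-intensity.jpg
cp <path_to_sup_file> navigation/                 # SUP file is not copied across by the script yet
(cd screenshots && rename.sh -f .txt.jpg .jpg)    # fix the .txt.jpg screenshot extension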

Once all these steps have been done the delivery is ready for checking.
