Opened 6 years ago

Last modified 6 years ago

#627 new flight processing

Fenix Calibration, 2018

Reported by: asm Owned by:
Priority: immediate Milestone:
Component: Processing: general Keywords: calibration, fenix
Cc: Other processors:


This ticket records the calibration of the Fenix sensor. The Fenix sensor was repaired by Specim, and the detector array was replaced with a new one, so timeline comparisons should not be made: it is essentially a new sensor. Specim also performed their own calibration in February 2018.

Data location:

Change History (7)

comment:1 Changed 6 years ago by asm

-The 2018 calibration took 4 days: 16-19 April 2018.

-The Fenix sensor needed a replacement of the detector array, so the new wavelengths are expected to be different.

-Specim performed its own calibration in February 2018. At first look, the new hdr files from Specim appear to be consistent with our wavelength calibration (close to the scale given by Fenix).

-Details recorded are within the spreadsheet: ./data_details/recorded_at_BAS_lab/Capture_Spreadsheet_2018-04.ods

-In short, the calibration went as follows:

Day 1: Fenix wavelength cal + radiometric cal (no dark frames for 1 spectral lamp and some radiometric files). Also some Owl validation data with the black body.
Day 2: Monochromator tests for different wavelength ranges in the SWIR plus one in the VNIR (no Fenix calibration data).
Day 3: Fenix wavelength cal + radiometric cal (no black cloth used).
Day 4: Fenix wavelength cal + radiometric cal. Also some Owl validation data with the black body.

Last edited 6 years ago by asm (previous) (diff)

comment:2 Changed 6 years ago by asm

Wavelength accuracy.

After working with the data quite a lot, I am happy now. I have decided not to use the Mylar filter, which provides some features in the SWIR region, because its accuracy is 1 nm and the FWHMs of those features have to be subtracted to give an approximation. We had good data on all days, but the most precise is from day 1. A quick summary:

-Day 1: Best fit; 33 wavelengths used, 6 of them in the SWIR. The offsets are 0.076660 for the VNIR and -1.199219 for the SWIR.
The Oxygen lamp could not be used for day 1 because it had no dark frames, but in any case it does not provide any usable feature on the other days either, as it is too weak.
-Day 3: We did not use the black cloth, so the data quality is good, but the other days' is even better. 35 wavelengths used, but 2 of them failed. We had an extra wavelength at 404.7 nm (Hg), but this region has very low sensitivity, so it should not be used. The offsets for this day were 0.370117 and -0.218750.
-Day 4: Put the black cloth back; also good data, with 30 wavelengths used but only 3 in the SWIR. Somehow we could not use the wavelengths of the Hg-Ar lamp that provide good anchors in the SWIR. Again the scripts picked up the 404.7 nm peak, which I think should not be used. The offsets in this case were 0.190430 and 0.200000.

As you can see, the offsets are small this season and all in the same range, so the calibration made by Specim was good. For all of the above, we will use the wavelength calibration obtained from the day 1 data. Right now it is only in my scratch space, but I will move it to a safe place once the full calibration procedure is done.
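The per-subsensor offsets quoted above can be thought of as the mean difference between the reference lamp wavelengths and the header wavelengths of the bands where those lines were detected. A minimal sketch of that idea, with invented peak values (not the real day-1 peak list, and not the actual calibration script):

```python
import numpy as np

def wavelength_offset(reference_nm, detected_nm):
    """Mean difference between reference lamp lines and the header
    wavelengths assigned to the bands where the peaks were detected."""
    reference_nm = np.asarray(reference_nm, dtype=float)
    detected_nm = np.asarray(detected_nm, dtype=float)
    return float(np.mean(reference_nm - detected_nm))

# Illustrative VNIR Hg/Ar lines and invented detected positions:
ref = [435.833, 546.074, 696.543, 763.511]
det = [435.76, 546.00, 696.47, 763.43]
offset_nm = wavelength_offset(ref, det)
```

A positive offset means the header wavelengths sit slightly below the true line positions, as for the VNIR on day 1.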

comment:3 Changed 6 years ago by asm

Wavelength accuracy.

It is worth noting that the generated table has an error when calculating the means. This can be fixed manually before creating the data quality report, and an issue should be created. Also, as Laura already pointed out, some of the peaks in the VNIR look to be influenced by nearby peaks of different spectral emissions, resulting in larger-than-real FWHMs. This is acceptable (one of the FWHMs is around ~4 nm when expected to be <3 nm) but would be nice to improve. Finally, the means for FWHM and errors should be calculated separately for the VNIR and SWIR bands rather than together (again, this will be fixed manually for now).
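The manual fix for the last point is straightforward: split the per-band FWHM array at the subsensor boundary and average each half separately. A sketch, using the 0-347 / 348-621 VNIR/SWIR split from the settings in comment 6 (the FWHM values here are invented):

```python
import numpy as np

VNIR_BANDS = 348  # VNIR bands 0-347, SWIR bands 348-621 (see comment 6)

def per_subsensor_means(fwhm_nm):
    """Separate FWHM means for the VNIR and SWIR subsensors."""
    fwhm_nm = np.asarray(fwhm_nm, dtype=float)
    return float(fwhm_nm[:VNIR_BANDS].mean()), float(fwhm_nm[VNIR_BANDS:].mean())

# Invented per-band FWHMs: narrower VNIR, broader SWIR
fwhm = np.r_[np.full(348, 2.8), np.full(274, 6.0)]
vnir_mean, swir_mean = per_subsensor_means(fwhm)
```

Averaging the two subsensors together would blend the ~3 nm VNIR and ~6 nm SWIR regimes into a single number that describes neither.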

comment:4 Changed 6 years ago by asm

Radiometric Calibration.

Started. The blue filter was not used in any of the datasets (Chris needed it for something else in Edinburgh). The blue filter is used to improve the signal of the SWIR bands, but Chris said it is not needed. Of the 3 days of datasets, we used data from the last day (day 4) because:
-Day 1 did not record dark frames for some of the radiometric files.
-Day 3 was recorded without the black cloth that prevents reflections from the external metal parts of the sensors and jigs (the data is good, but some effect is possible).
-Day 4 has dark frames recorded and the black cloth in place.

The first test gave good results. Created new calibration files as well as a preliminary bad pixels file, and processed one of the lamp files to level 1 to check the results. All looking good. Will do the manual method for the bad pixel identification and update the file. Will also note in the ticket the settings used.
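The gain calculation behind these calibration files follows the standard integrating-sphere approach: average the sphere frames, subtract the averaged dark frames, and divide the known sphere radiance by the net counts. This is a generic sketch of that method with invented numbers, not the actual processing-chain code:

```python
import numpy as np

def radiometric_gains(sphere_dn, dark_dn, sphere_radiance):
    """Per-band gain = known sphere radiance / dark-corrected mean counts.

    sphere_dn, dark_dn: (frames, bands) raw counts; sphere_radiance: (bands,)
    """
    net_dn = sphere_dn.mean(axis=0) - dark_dn.mean(axis=0)
    return sphere_radiance / net_dn  # radiance per count, per band

# Invented numbers: 2 frames x 3 bands
sphere = np.array([[1000.0, 2000.0, 500.0],
                   [1000.0, 2000.0, 500.0]])
dark = np.array([[100.0, 100.0, 100.0],
                 [100.0, 100.0, 100.0]])
radiance = np.array([90.0, 190.0, 40.0])
gains = radiometric_gains(sphere, dark, radiance)
```

Multiplying raw, dark-corrected counts by these per-band gains is what produces level-1 radiance.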

comment:5 Changed 6 years ago by asm

Radiometric Calibration

The new wavelengths start at 379.534 nm and end at 2505.494 nm. The file that contains the well-known values for the integrating sphere starts at 380 and ends at 2500. Because of that, the first and last bands were getting null coefficients. Changed the end wavelengths in the file to the new values, assuming the apparent radiance is approximately the same. That solved the problem.
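An equivalent way to apply the same fix is to resample the sphere radiance table onto the new sensor wavelengths with `np.interp`, which holds the end values constant outside the table's range; that matches the assumption above that the radiance at 379.534 and 2505.494 nm is approximately the same as at the table's end points. The table values here are invented:

```python
import numpy as np

# Invented 3-point sphere radiance table spanning 380-2500 nm
table_wl = np.array([380.0, 500.0, 2500.0])
table_rad = np.array([55.0, 120.0, 6.0])

# Sensor wavelengths that fall just outside the table at both ends
sensor_wl = np.array([379.534, 450.0, 2505.494])

# np.interp clamps to the end values beyond the table's range,
# so the first and last bands get non-null coefficients
resampled = np.interp(sensor_wl, table_wl, table_rad)
```

The in-range bands are linearly interpolated as usual; only the out-of-range end bands inherit the nearest table value.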

No blue filter was used. The filter makes the SWIR bands easier to detect, but since it was absent the same file was used for both subsensors.

Created the bad pixel file (more details later). Processed a couple of the 2018 Fenix boresight lines to level 1 to make sure everything is OK; everything looks fine. Also processed the same files to level 1 using the 2015 calibration to make sure the SWIR coefficients were similar (2015 did have the blue filter fitted). Everything looks good: slightly larger coefficients for 2018 than 2015 in the SWIR bands, and no spikes in the VNIR. Also compared with Py6S against vegetation pixels; the spectra look sensible.

comment:6 Changed 6 years ago by asm

Bad Pixel Mapping

We have a new array, so the Method E (visual inspection) bad pixel list was recreated from scratch. Created a preliminary bad pixel file, made the Method E list by visualizing the data in fastQC, and then recreated the data with that new list and the same settings. Used data from day 3 (day 1 has no dark frames recorded and day 4 did not use different integration times). Settings used:

BANDSfirst = 0
BANDSlast = 347
subsensor = 0

A_thresh = 10
B_spectralwidth = 3
B_spatialwidth = 10
B_thresh = 10
C_thresh = 0.9825
D_spectralwidth = 1
D_spatialwidth = 18
D_thresh = 3
F_thresh = 0.1
F_bandranges = %(BANDSfirst)s %(BANDSlast)s
F_gainsfile = /data/turkana1/scratch/asm/working_on/processing_calibration_data/cal_data_20180416/processed/d1-no-mylar/fenix/output/

BANDSfirst = 348
BANDSlast = 621
subsensor = 1

A_thresh = 10
B_spectralwidth = 3
B_spatialwidth = 10
B_thresh = 20
C_thresh = 0.9825
D_spectralwidth = 1
D_spatialwidth = 18
D_thresh = 3
F_thresh = 0.3
F_bandranges = %(BANDSfirst)s %(BANDSlast)s
F_gainsfile = /data/turkana1/scratch/asm/working_on/processing_calibration_data/cal_data_20180416/processed/d1-no-mylar/fenix/output/
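The `%(BANDSfirst)s` references in the settings above are Python configparser interpolation: they expand to the values of other keys in the same section. A minimal sketch of how such a section resolves (the section name `fenix_vnir` is invented for illustration):

```python
from configparser import ConfigParser

cfg = ConfigParser()  # BasicInterpolation is the default
cfg.read_string("""
[fenix_vnir]
BANDSfirst = 0
BANDSlast = 347
F_bandranges = %(BANDSfirst)s %(BANDSlast)s
""")

# %(...)s placeholders are substituted on access
bandranges = cfg.get("fenix_vnir", "F_bandranges")
```

So `F_bandranges` resolves to `0 347` for the VNIR block and `348 621` for the SWIR block without repeating the band numbers.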

A total of 8688 pixels turned out to be bad, around ~3.64% of the array.
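The quoted percentage is consistent with the full detector frame implied by the settings above: 622 bands (0-621) by 384 spatial pixels (the 384-pixel swath is an assumption, not stated in the ticket):

```python
# Cross-check of the bad pixel fraction quoted above
bands = 622     # bands 0-621 from the two settings blocks
spatial = 384   # assumed Fenix spatial swath (not stated in the ticket)
bad = 8688

percent_bad = round(100 * bad / (bands * spatial), 2)
```

8688 of 238,848 pixels gives 3.64%, matching the figure above.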

comment:7 Changed 6 years ago by asm

Calibration files

Created the calibration files for the different binnings and placed everything under ~arsf/calibration/2018/fenix.
Did try to compare the uniform and absolute files. The script has never been updated for Fenix, but I made some changes. It is hard to see what the script is actually doing, but everything looks OK. I might come back to this in the future.

The only remaining part now is to create a few more plots and create the data quality report.
