# Comprehensive Tutorial: Step 1 - Prepare CCD Images for CT Reconstruction
## Overview
This notebook (`step1_prepare_CCD_images.ipynb`) is the first step in a CT (Computed Tomography) reconstruction workflow. It prepares CCD (Charge-Coupled Device) images acquired from neutron imaging experiments for tomographic reconstruction. The notebook handles data loading, normalization, various preprocessing operations, and exports the processed data ready for reconstruction.
## Table of Contents
1. [Introduction](#introduction)
2. [Prerequisites](#prerequisites)
3. [Workflow Overview](#workflow-overview)
4. [Detailed Step-by-Step Guide](#detailed-step-by-step-guide)
5. [Optional Processing Steps](#optional-processing-steps)
6. [Troubleshooting](#troubleshooting)
7. [Developer Notes](#developer-notes)
---
## Introduction
### Purpose
This notebook processes raw CCD projection images to:
- Load and organize tomography data
- Normalize images using Open Beam (OB) and Dark Current (DC) images
- Apply various preprocessing techniques (cropping, cleaning, rebinning)
- Prepare data for CT reconstruction algorithms
- Calculate center of rotation and tilt correction
### Key Features
- **Flexible normalization**: Optional OB/DC correction or use pre-normalized data
- **Multiple cleaning algorithms**: Remove outliers, dead pixels, and stripes
- **Interactive visualization**: Inspect data at each processing stage
- **Center of rotation calculation**: Automatic and manual modes
- **Multiple reconstruction methods**: Support for various algorithms
- **Configuration export**: Generates config files for subsequent reconstruction steps
---
## Prerequisites
### Required Data
1. **Sample projection images**: One image per rotation angle
2. **Open Beam (OB) images** (optional): Images without sample for normalization
3. **Dark Current (DC) images** (optional): Images with shutter closed for background subtraction
### System Requirements
- Python environment with required packages (defined in `requirements.txt`)
- Access to instrument data (e.g., IPTS directory at CG1D instrument)
- Sufficient memory for loading full 3D image stacks
### Required Libraries
- numpy, scipy for numerical operations
- tomopy for tomography algorithms
- Custom modules from `__code` package
---
## Workflow Overview
The notebook follows this general workflow:
```
1. Initialize & Setup
↓
2. Load Sample Data
↓
3. [Optional] Load OB/DC Images
↓
4. Retrieve Rotation Angles
↓
5. Select Data Fraction
↓
6. Load Data (creates master_3d_data_array)
↓
7. [Optional] Pre-processing
- Visualization
- Image exclusion
- Cropping
- Outlier removal
- Rebinning
↓
8. Normalization
↓
9. [Optional] Post-normalization Processing
- Rotation
- Cropping
- Rebinning
↓
10. Minus Log Conversion
↓
11. [Optional] Stripe Removal
↓
12. [Optional] Tilt Correction
↓
13. Center of Rotation Calculation
↓
14. [Optional] Test Reconstruction
↓
15. Configure Reconstruction
↓
16. Export Data & Config Files
```
---
## Detailed Step-by-Step Guide
### Step 1: Initialization
**Cell 1: Import libraries and set working directory**
```python
import warnings
warnings.filterwarnings('ignore')
from __code.step1_prepare_ccd_images import Step1PrepareCcdImages
from __code import system
system.System.select_working_dir(ipts="IPTS-33767", instrument="CG1D")
from __code.__all import custom_style
custom_style.style()
Step1PrepareCcdImages.legend()
```
**What it does:**
- Imports necessary modules
- Sets the working directory to your IPTS (Integrated Proposal Tracking System) experiment folder
- Applies custom styling for visualizations
- Displays the legend/instructions
**What to modify:**
- Replace `"IPTS-33767"` with your experiment number
- Replace `"CG1D"` with your instrument name if different
---
### Step 2: Input Sample Folder
**Cell 2: Select sample projections folder**
```python
o_white_beam = Step1PrepareCcdImages(system=system)
o_white_beam.select_top_sample_folder()
```
**What it does:**
- Creates the main processing object
- Opens a file browser to select the folder containing your projection images
- Each file should represent one rotation angle
**Instructions:**
- Navigate to the folder containing your sample projection images
- Typically one TIFF file per angle
- Click "Select" to confirm
---
### Step 3: Input Open Beam (OB) Images (OPTIONAL)
**Cell 3: Select OB images**
```python
o_white_beam.select_ob_images()
```
**What it does:**
- Opens a file browser to select Open Beam (reference) images
- These are images acquired without the sample in the beam path
**When to use:**
- **Required** if your data needs normalization
- **Skip** if you're working with pre-normalized data
**Instructions:**
- Select multiple OB images (they will be averaged)
- Choose images acquired under the same conditions as your sample
---
### Step 4: Input Dark Current (DC) Images (OPTIONAL)
**Cell 4: Select DC images**
```python
o_white_beam.select_dc_images()
```
**What it does:**
- Opens a file browser to select Dark Current images
- These are images acquired with the shutter closed (no beam)
**When to use:**
- Use if you also selected OB images
- Ignored if you skipped OB selection
**Instructions:**
- Select multiple DC images (they will be averaged)
- These should be acquired with the same exposure time as your data
---
### Step 5: Retrieve Projection Angles
**Cell 5-7: Configure and retrieve angles**
```python
# Cell 5: Select method to retrieve angles
o_white_beam.how_to_retrieve_angle_value()
# Cell 6: Retrieve angles
o_white_beam.retrieve_angle_value()
# Cell 7: Test angle values
o_white_beam.testing_angle_values()
```
**What it does:**
- **Cell 5**: Choose how to extract rotation angles from your data
- From filename pattern
- From metadata in image files
- From separate angle file
- **Cell 6**: Executes the angle retrieval
- **Cell 7**: Displays the angles found and verifies they're reasonable
**Instructions:**
- Select the appropriate method based on your data format
- Verify angles are in the expected range (typically 0-360° or 0-180°)
- Check that angles are sorted correctly
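
As an illustration of the filename approach, here is a minimal sketch that parses an angle out of a file name; the pattern shown is purely an assumption, not the notebook's actual parser, and must be adapted to your naming convention:

```python
import re

def angle_from_filename(filename):
    """Hypothetical example: pull the rotation angle out of a name such as
    'sample_0010_angle_045.250.tiff'. The regular expression is an assumption;
    adapt it to your own file naming convention."""
    match = re.search(r"angle_([0-9]+\.?[0-9]*)", filename)
    return float(match.group(1)) if match else None

# Example: angle_from_filename('sample_0010_angle_045.250.tiff') -> 45.25
```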
---
### Step 6: Select Data Fraction
**Cell 8-9: Use all data or subset**
```python
# Cell 8: Choose to use all or fraction
o_white_beam.use_all_or_fraction()
# Cell 9: Select percentage (if using fraction)
o_white_beam.select_percentage_of_data_to_use()
```
**What it does:**
- Allows you to use only a percentage of your data for testing
- Useful for quick processing trials before running the full dataset
**Instructions:**
- Select "All" for final processing
- Select "Fraction" for testing with a slider to choose percentage
- Recommended: Use 10-20% for initial tests
---
### Step 7: Load Data
**Cell 10: Load projection data**
```python
o_white_beam.load_data()
```
**What it does:**
- Loads all selected projection images into memory
- For white beam mode: sums all counts (losing TOF information)
- Sorts runs by increasing angle value
- Creates the `master_3d_data_array` dictionary
**Developer Notes:**
- This creates key data structures:
  - `master_3d_data_array`: Main 3D data dictionary
  - `list_of_images`: List of TIFF files
  - `final_list_of_angles` and `final_list_of_angles_rad`: Angle arrays
**Time considerations:**
- Can take several seconds to minutes depending on:
- Number of images
- Image size
- Available RAM
---
## Optional Processing Steps
### Visualization (After Loading)
**Cell 11-12: Visualize raw data**
```python
# Cell 11: Select visualization mode
o_white_beam.how_to_visualize()
# Cell 12: Display images
o_white_beam.visualize_raw_data()
```
**Options:**
- **All images**: Shows all projections (can be slow)
- **Visual verification of raw, OB, DC**: Quick check of key images
**Purpose:**
- Verify data loaded correctly
- Check for obvious issues (hot pixels, beam variations)
- Inspect OB/DC quality
---
### Image Exclusion
**Cell 13-16: Exclude problematic images**
```python
# Cell 13: Select exclusion mode
o_white_beam.selection_mode()
# Cell 14: Define images to exclude
o_white_beam.process_exclusion_mode()
# Cell 15: Execute exclusion
o_white_beam.exclude_this_list_of_images()
```
**When to use:**
- Remove corrupted images
- Exclude images with beam fluctuations
- Remove angles with sample movement
**Exclusion modes:**
- **Manual selection**: Click to select from list
- **Range definition**: Specify angle or index ranges
- **List input**: Provide comma-separated list
---
### Pre-processing Crop
**Cell 16-17: Crop raw data**
```python
# Cell 16: Define crop region
o_white_beam.pre_processing_crop_settings()
# Cell 17: Apply cropping
o_white_beam.pre_processing_crop()
```
**What it does:**
- Crops all images to a defined region of interest
- Reduces data size for faster processing
- Updates `master_3d_data_array`
**When to use:**
- Remove detector edges with artifacts
- Focus on region containing sample
- Reduce memory requirements
**Instructions:**
- Draw a rectangle on the displayed image
- Ensure sample is fully contained within crop region
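
Under the hood, cropping amounts to plain array slicing over the whole stack; a minimal sketch (not the notebook's internal code):

```python
import numpy as np

def crop_stack(stack, top, bottom, left, right):
    """Crop every projection to the same rectangular region of interest.
    stack shape: (n_angles, height, width); bounds are pixel indices."""
    return stack[:, top:bottom, left:right]
```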
---
### Remove Outliers
**Cell 18-21: Clean images**
```python
# Cell 18: Select cleaning algorithms
o_white_beam.clean_images_settings()
# Cell 19: Configure histogram method (if selected)
o_white_beam.clean_images_setup()
# Cell 20: Execute cleaning
o_white_beam.clean_images()
```
**Available algorithms:**
1. **Histogram method** (in-house):
- Removes dead pixels (first bin)
- Removes abnormally high counts (last bin)
- Configurable bin thresholds
2. **Tomopy remove_outlier**:
- Removes bright spots
- Median filter-based approach
3. **Scipy gamma_filter**:
- Statistical outlier removal
**Instructions:**
- Select one or more algorithms
- For histogram: adjust which bins to consider as outliers
- Run cleaning and verify results in visualization
**Developer Note:** Updates `master_3d_data_array`
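
As an illustration of the tomopy-based option, here is a minimal sketch; the threshold values are illustrative starting points, not the notebook's defaults:

```python
import numpy as np
import tomopy

def remove_bright_spots(stack, dif=500, size=3):
    """Median-filter based outlier removal: pixels that differ from their
    local median by more than `dif` counts are replaced by that median."""
    return tomopy.remove_outlier(stack.astype(np.float32), dif, size=size)
```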
---
### Visualization After Cleaning
**Cell 22-24: Visualize cleaned data**
```python
# Cell 22: Select visualization mode
o_white_beam.how_to_visualize_after_cleaning()
# Cell 23: Display cleaned images
o_white_beam.visualize_cleaned_data()
```
**Purpose:**
- Compare before/after cleaning
- Verify outliers were properly removed
- Check that sample features weren't affected
---
### Rebinning Before Normalization
**Cell 24-26: Rebin pixels**
```python
# Cell 24: Configure rebinning
o_white_beam.rebin_settings()
# Cell 25: Execute rebinning
o_white_beam.rebin_before_normalization()
# Cell 26-27: Visualize rebinned data
o_white_beam.visualization_normalization_settings()
o_white_beam.visualize_rebinned_data(before_normalization=True)
```
**What it does:**
- Combines adjacent pixels to reduce data size
- Improves signal-to-noise ratio
- Trades spatial resolution for processing speed
**Common binning factors:**
- 2×2: Reduces data by 4×, modest resolution loss
- 4×4: Reduces data by 16×, significant resolution loss
- Asymmetric (e.g., 2×4): Different binning in x and y
**When to use:**
- Large detector images (>2048×2048)
- Low signal-to-noise data
- Quick test reconstructions
**Developer Note:** Updates `master_3d_data_array`
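
For intuition, a minimal numpy sketch of 2×2 rebinning by averaging; the notebook's own implementation and its handling of odd image sizes may differ:

```python
import numpy as np

def rebin_2x2(stack):
    """Average 2x2 blocks of adjacent pixels in every projection.
    stack shape: (n_angles, height, width)."""
    n, h, w = stack.shape
    h, w = h - h % 2, w - w % 2              # trim odd edges if necessary
    blocks = stack[:, :h, :w].reshape(n, h // 2, 2, w // 2, 2)
    return blocks.mean(axis=(2, 4))
```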
---
### Normalization
**Cell 28-31: Normalize projections**
```python
# Cell 28: Configure normalization settings
o_white_beam.normalization_settings()
# Cell 29: Select background ROI (if using)
o_white_beam.normalization_select_roi()
# Cell 30: Execute normalization
o_white_beam.normalization()
```
**Normalization formula:**
```
normalized = (sample - DC) / (OB - DC)
```
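
A minimal numpy sketch of this formula (illustrative only, not the notebook's internal implementation):

```python
import numpy as np

def normalize(sample, ob, dc=None):
    """sample: 3D projection stack; ob, dc: averaged open-beam and
    dark-current images. Without DC images, dc defaults to zero."""
    dc = np.zeros_like(ob) if dc is None else dc
    denominator = np.clip(ob - dc, 1e-6, None)   # guard against division by zero
    return (sample - dc) / denominator
```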
**Advanced options:**
1. **Sample background ROI**:
- Select region without sample
- Scales images so this region has transmission = 1
- Corrects for beam profile variations
2. **Background ROI matching**:
- Matches sample background to OB background
- Improves normalization accuracy
**Instructions:**
- Enable desired normalization enhancements
- If using ROI, draw rectangle outside sample area
- Execute normalization
**Developer Note:** Creates `normalized_images` 3D array
---
### Visualization After Normalization
**Cell 32-33: View normalized data**
```python
# Cell 32: Configure visualization
o_white_beam.visualization_normalization_settings()
# Cell 33: Display normalized images
o_white_beam.visualize_normalization()
```
**What to check:**
- Transmission values typically 0-1
- Background should be uniform
- Sample features should be clear
---
### Export Normalized Data (OPTIONAL)
**Cell 34-35: Save normalized projections**
```python
# Cell 34: Select output folder
o_white_beam.select_export_normalized_folder()
# Cell 35: Export images
o_white_beam.export_normalized_images()
```
**When to use:**
- Save intermediate results
- Use normalized data in other software
- Archive processed data
---
### Post-normalization Rebinning
**Cell 36-39: Rebin after normalization**
```python
# Cell 36: Configure rebinning
o_white_beam.rebin_settings()
# Cell 37: Execute rebinning
o_white_beam.rebin_after_normalization()
# Cell 38-39: Visualize
o_white_beam.visualization_normalization_settings()
o_white_beam.visualize_rebinned_data()
```
**Note:** Similar to pre-normalization rebinning, but operates on normalized data
**Developer Note:** Updates `normalized_images` array
---
### Crop Normalized Data
**Cell 40-41: Crop after normalization**
```python
# Cell 40: Define crop region
o_white_beam.crop_settings()
# Cell 41: Apply crop
o_white_beam.crop()
```
**When to use:**
- Remove edges with normalization artifacts
- Further reduce data size
- Focus tightly on sample
**Developer Note:** Updates `normalized_images` array
---
### Rotate Data
**Cell 42-45: Align rotation axis**
```python
# Cell 42: Check if rotation needed
o_white_beam.is_rotation_needed()
# Cell 43: Select rotation angle
o_white_beam.rotate_data_settings()
# Cell 44: Apply rotation
o_white_beam.apply_rotation()
# Cell 45: Visualize rotated data
o_white_beam.visualize_after_rotation()
```
**Critical requirement:**
- **The rotation axis MUST be vertical for reconstruction algorithms to work**
**What it does:**
- Rotates images by specified angle (typically -90°, 90°, or 180°)
- Aligns rotation axis vertically
**Instructions:**
- Check if rotation axis is already vertical
- If not, select appropriate rotation angle
- Common case: Rotate by 90° or -90°
**Developer Note:** Updates `normalized_images` array
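
Conceptually, the operation is a 90° rotation of every projection in the image plane; a minimal sketch (the notebook drives this through its own widget):

```python
import numpy as np

def rotate_stack_90(stack, k=1):
    """Rotate every projection by k * 90 degrees so the rotation axis
    ends up vertical. stack shape: (n_angles, height, width)."""
    return np.rot90(stack, k=k, axes=(1, 2))
```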
---
### Minus Log Conversion
**Cell 46: Convert to attenuation**
```python
o_white_beam.log_conversion_and_cleaning()
```
**What it does:**
- Converts transmission values to attenuation: `-log(transmission)`
- This is required for reconstruction algorithms
- Handles zero/negative values appropriately
**Mathematics:**
```
attenuation = -ln(normalized)
```
**Developer Note:** Creates `normalized_images_log` 3D array
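
A minimal sketch of the conversion, clipping non-positive transmission values before taking the logarithm; the notebook's exact handling of zeros and negatives may differ:

```python
import numpy as np

def minus_log(transmission, floor=1e-6):
    """Convert transmission to attenuation: -ln(T), with T clipped to a
    small positive floor to avoid -inf and NaN values."""
    return -np.log(np.clip(transmission, floor, None))
```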
---
### Visualization After Log
**Cell 47: View attenuation data**
```python
o_white_beam.visualize_images_after_log()
```
**What to check:**
- Higher attenuation = more material
- Background should be near zero
- No NaN or Inf values
---
### Sinogram Visualization
**Cell 48: View sinograms**
```python
o_white_beam.visualize_sinograms()
```
**What is a sinogram:**
- The same horizontal line (detector row) taken from every projection
- Shows how one slice appears at all angles
- Sinusoidal patterns indicate off-center features
**What to check:**
- Smooth sinusoidal curves
- No discontinuities or jumps
- Consistent intensity across angles
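
To make the definition concrete, a minimal sketch that extracts and displays one sinogram from the attenuation stack (illustrative only):

```python
import matplotlib.pyplot as plt

def show_sinogram(stack, row):
    """A sinogram is the same detector row taken from every projection.
    stack shape: (n_angles, height, width); row: the slice height to inspect."""
    sinogram = stack[:, row, :]
    plt.imshow(sinogram, cmap='gray', aspect='auto')
    plt.xlabel('detector column')
    plt.ylabel('projection index (angle)')
    plt.title(f'sinogram at row {row}')
    plt.show()
```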
---
### Stripe Removal
**Cell 49-56: Remove ring artifacts**
This is a multi-step iterative process:
```python
# Step 1: Select data range for testing
o_white_beam.select_range_of_data_to_test_stripes_removal()
# Step 2: Select algorithms to test
o_white_beam.select_remove_strips_algorithms()
# Step 3: Define algorithm settings
o_white_beam.define_settings()
# Step 4: Test on selected range
o_white_beam.test_algorithms_on_selected_range_of_data()
# Repeat steps 2-4 until satisfied
# Step 5: Choose when to apply
o_white_beam.when_to_remove_strips()
# Step 6: Execute on full dataset (if in-notebook selected)
o_white_beam.remove_strips()
# Visualize results
o_white_beam.display_removed_strips()
```
**What are stripes:**
- Vertical lines in sinograms
- Caused by dead pixels or detector calibration issues
- Cause ring artifacts in reconstruction
**Available algorithms:**
- Various Tomopy methods (Fourier-Wavelet, etc.)
- Each with adjustable parameters
**Workflow:**
1. **Test on subset**: Select a few representative slices
2. **Try algorithms**: Test different methods with various parameters
3. **Compare results**: Visual comparison of original vs. cleaned
4. **Iterate**: Adjust parameters and retest
5. **Apply to all**: Once satisfied, apply to full dataset
**Processing options:**
- **In notebook**: Process now (can be slow for large datasets)
- **Outside notebook**: Process during reconstruction (recommended for large data)
**Developer Note:** Updates `normalized_images_log` if in-notebook processing
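
For reference, a minimal sketch of the Fourier-Wavelet method as exposed by tomopy; the parameter values are illustrative starting points, not the notebook defaults:

```python
import tomopy

def remove_stripes_fw(attenuation_stack):
    """Fourier-Wavelet stripe removal on a 3D attenuation stack laid out
    as (angles, rows, columns), e.g. the notebook's normalized_images_log."""
    return tomopy.remove_stripe_fw(attenuation_stack,
                                   level=5, wname='db5', sigma=2, pad=True)
```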
---
### Tilt Correction
**Cell 57-58: Correct for sample tilt**
```python
# Cell 57: Select sample ROI
o_white_beam.select_sample_roi()
# Cell 58: Calculate and apply tilt
o_white_beam.perform_tilt_correction()
```
**What is tilt:**
- Sample not perfectly perpendicular to rotation axis
- Causes slanted features in reconstruction
- Calculated from 0° and 180° projections
**Instructions:**
- Draw ROI that contains the sample
- Algorithm finds optimal tilt correction
- Tilt is automatically applied
**Developer Note:** Updates `normalized_images_log` array
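
Once the tilt angle has been estimated, applying it amounts to rotating every projection by the opposite angle; a minimal scipy sketch (illustrative only, not the notebook's internal routine):

```python
from scipy import ndimage

def apply_tilt(stack, tilt_deg):
    """Rotate every projection by -tilt_deg in the image plane so the
    rotation axis becomes vertical. stack shape: (n_angles, height, width)."""
    return ndimage.rotate(stack, -tilt_deg, axes=(1, 2), reshape=False, order=1)
```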
---
### Center of Rotation
**Cell 59-63: Calculate rotation center**
```python
# Cell 59: Configure settings
o_white_beam.center_of_rotation_settings()
# Cell 60: Select calculation mode
o_white_beam.run_center_of_rotation()
# Cell 61: Execute calculation
o_white_beam.determine_center_of_rotation()
# Cell 62: Visualize result
o_white_beam.display_center_of_rotation()
```
**What is center of rotation (COR):**
- Pixel location of the rotation axis in projections
- Critical parameter for reconstruction quality
- Incorrect COR causes blurring and doubling
**Calculation modes:**
1. **Automatic** (recommended):
- Uses algotom library
- Analyzes 0° and 180° projections
- Typically very accurate
2. **Manual**:
- Interactive adjustment
- Visual feedback
- Use when automatic fails
**Instructions:**
- Try automatic mode first
- Verify COR looks reasonable
- Use manual mode to fine-tune if needed
- Typical COR: near the detector center (e.g., ~1024 for a detector 2048 pixels wide)
**Developer Note:** Uses `normalized_images_log` array; stores COR value
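
As a point of reference, a minimal sketch of an automatic estimate; the notebook uses algotom, while tomopy exposes an equivalent Vo-method implementation, shown here only to illustrate the idea:

```python
import tomopy

def estimate_cor(attenuation_stack):
    """Automatic center-of-rotation estimate on a stack laid out as
    (angles, rows, columns). Returns the COR in pixels."""
    return float(tomopy.find_center_vo(attenuation_stack))
```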
---
### Test Reconstruction
**Cell 63-64: Reconstruct test slices**
```python
# Cell 63: Select slices to reconstruct
o_white_beam.select_slices_to_use_to_test_reconstruction()
# Cell 64: Run reconstruction
o_white_beam.run_reconstruction_of_slices_to_test()
```
**Purpose:**
- Verify all parameters (COR, tilt, cleaning) are correct
- Quick feedback without processing entire volume
- Test different reconstruction algorithms
**Instructions:**
- Select 2-5 representative slices
- Include slices with clear features
- Check reconstruction quality
- Adjust parameters if needed and retest
**What to look for in reconstructions:**
- Sharp edges (correct COR)
- No ring artifacts (effective stripe removal)
- Level slices (correct tilt)
- Good contrast
- No unusual artifacts
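
For orientation, the following sketch shows roughly what such a quick gridrec test reconstruction looks like when done directly with tomopy (illustrative only; the notebook drives this step through its own widgets):

```python
import numpy as np
import tomopy

def reconstruct_test_slices(attenuation_stack, angles_rad, cor, rows):
    """attenuation_stack: (angles, height, width); angles_rad in radians;
    cor: center of rotation in pixels; rows: list of slice heights to test."""
    subset = attenuation_stack[:, rows, :]            # keep only the test rows
    recon = tomopy.recon(subset, np.asarray(angles_rad),
                         center=cor, algorithm='gridrec')
    return tomopy.circ_mask(recon, axis=0, ratio=0.95)
```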
---
### Select Reconstruction Method
**Cell 65: Choose algorithm(s)**
```python
o_white_beam.select_reconstruction_method()
```
**Available methods:**
- **FBP (Filtered Back Projection)**: Fast, standard method
- **Gridrec**: Optimized FBP, very fast
- **ART (Algebraic Reconstruction Technique)**: Iterative
- **SIRT (Simultaneous Iterative Reconstruction Technique)**: Iterative, better for limited angles
- **Others**: Various specialized algorithms
**Recommendations:**
- **Gridrec**: Best for most cases (fast, high quality)
- **FBP**: When you need standard method
- **SIRT**: For limited angle or noisy data
- Can select multiple to compare
---
### Reconstruction Settings
**Cell 66: Configure parameters**
```python
o_white_beam.reconstruction_settings()
```
**Key parameters:**
1. **Filter** (for FBP/Gridrec):
- shepp, cosine, hann, hamming, ramlak, etc.
- Affects sharpness vs. noise tradeoff
- Default: ramlak or shepp (good starting points)
2. **Number of iterations** (for iterative methods):
- More iterations = better convergence
- Typical: 10-50 iterations
3. **Other parameters**:
- Depending on selected algorithm
- Generally safe to use defaults initially
**Instructions:**
- Start with default values
- Adjust based on test reconstruction results
- Note your settings for documentation
---
### Export Configuration and Data
**Cell 67-68: Save everything**
```python
# Cell 67: Select what to export
o_white_beam.select_export_extra_files()
# Cell 68: Execute export
o_white_beam.export_extra_files(prefix='step1')
```
**Exported files:**
1. **Config file** (`step1_####.json`):
- Contains all parameters for reconstruction
- Used in step 2 of workflow
- Includes: angles, COR, tilt, algorithms, etc.
2. **Log file**:
- Complete record of processing steps
- Parameters used
- Timing information
3. **Processed projections**:
- Stack of pre-processed images ready for reconstruction
- TIFF format
- Contains all corrections applied
**Instructions:**
- Select output folder
- Verify sufficient disk space
- Export can take several minutes for large datasets
- Keep the config file safe; you'll need it for the reconstruction step
---
## Troubleshooting
### Common Issues and Solutions
#### 1. Memory Errors
**Symptom:** Notebook crashes or "Out of Memory" errors
**Solutions:**
- Use data fraction option (10-20%) for testing
- Apply cropping early to reduce data size
- Use rebinning to reduce resolution
- Close other applications
- Process on machine with more RAM
#### 2. Incorrect Angles
**Symptom:** Angles not sorted correctly or wrong range
**Solutions:**
- Check angle retrieval method
- Verify filename or metadata format
- Manually inspect a few files to understand format
- Use testing cell to verify angles
#### 3. Poor Normalization
**Symptom:** Uneven background, strange transmission values
**Solutions:**
- Verify OB and DC images are correct
- Check that exposure times match
- Use background ROI option
- Ensure OB images have good statistics (multiple images averaged)
#### 4. Stripes in Sinograms
**Symptom:** Vertical lines in sinogram visualization
**Solutions:**
- Use stripe removal algorithms
- Test different methods (Fourier-Wavelet often effective)
- Adjust parameters iteratively
- May need multiple algorithms in sequence
#### 5. Blurry Reconstruction
**Symptom:** Test reconstruction is blurred or doubled
**Solutions:**
- An incorrect COR is the most likely cause: recalculate it
- Use manual COR adjustment to fine-tune
- Check that rotation axis is vertical
- Verify tilt correction was applied
#### 6. Ring Artifacts
**Symptom:** Concentric circles in reconstruction
**Solutions:**
- Remaining stripes are the usual cause
- Return to the stripe removal step
- Try different or multiple algorithms
- Increase aggressiveness of removal
#### 7. Tilted Reconstructions
**Symptom:** Features appear slanted in reconstruction
**Solutions:**
- Apply tilt correction
- Verify 0° and 180° projections are correctly identified
- Check that sample ROI includes entire sample
---
## Developer Notes
### Key Data Structures
1. **master_3d_data_array**
- Created: After load_data()
- Type: Dictionary
- Contains: Raw projection data
- Updated by: cropping, cleaning, rebinning (pre-normalization)
2. **normalized_images**
- Created: After normalization()
- Type: 3D numpy array
- Contains: Normalized transmission values
- Updated by: cropping, rotating, rebinning (post-normalization)
3. **normalized_images_log**
- Created: After log_conversion_and_cleaning()
- Type: 3D numpy array
- Contains: Attenuation values (-log of transmission)
- Updated by: stripe removal, tilt correction
- **This is the final data used for reconstruction**
4. **final_list_of_angles** / **final_list_of_angles_rad**
- Created: After retrieve_angle_value()
- Type: List
- Contains: Rotation angles in degrees and radians
### Processing Pipeline Summary
```
Raw Data → master_3d_data_array
↓ (optional cleaning, cropping, rebinning)
master_3d_data_array → normalized_images (via normalization)
↓ (optional rotation, cropping, rebinning)
normalized_images → normalized_images_log (via -log conversion)
↓ (optional stripe removal, tilt correction)
normalized_images_log → EXPORT for reconstruction
```
### Code Organization
The notebook uses the `Step1PrepareCcdImages` class which encapsulates:
- Data loading and management
- Image processing algorithms
- Visualization tools
- Interactive widgets for parameter selection
- Export functions
Each method typically:
1. Opens an interactive widget OR performs computation
2. Updates internal data structures
3. Optionally displays results
---
## Best Practices
### Recommended Workflow for New Data
1. **Quick Test Run (10% data)**
- Load 10% of data
- Skip OB/DC if pre-normalized
- Do minimal processing
- Calculate COR
- Test reconstruct 2-3 slices
- **Goal**: Verify the data is good and understand it
2. **Optimize Processing (20-50% data)**
- Test different cleaning algorithms
- Find best stripe removal method
- Optimize COR calculation
- Try different reconstruction algorithms
- **Goal**: Find optimal parameters
3. **Full Processing (100% data)**
- Use optimized parameters from step 2
- Process full dataset
- Export everything
- Document your choices
- **Goal**: Final processed data for analysis
### Parameter Documentation
Always document:
- Rebinning factors used
- Cleaning algorithms and parameters
- Stripe removal method and settings
- COR value (automatic or manual)
- Tilt correction value
- Reconstruction algorithm chosen
- Any images excluded and why
### Visualization Strategy
- **Always visualize** after major processing steps
- Compare before/after for cleaning operations
- Check sinograms for stripes
- Verify test reconstructions before full processing
### Performance Tips
- Crop as early as possible
- Use rebinning for large detectors
- Process stripe removal outside notebook for large datasets
- Close visualizations when done to free memory
---
## Next Steps
After completing this notebook, you will have:
- Processed projection data (TIFF stack)
- Configuration JSON file
- Log file
**Proceed to:** `step2_slice_CCD_or_TimePix_images.ipynb` or `step3_reconstruction_CCD_or_TimePix_images.py` depending on whether you need to:
- **Step 2**: Select specific slices or regions to reconstruct
- **Step 3**: Perform full volume reconstruction
---
## Additional Resources
### Understanding CT Reconstruction
- **Projections**: 2D images of sample at different angles
- **Sinogram**: Rearranged data showing one slice across all angles
- **Reconstruction**: Mathematical process to recover 3D structure from projections
- **Center of Rotation**: Axis around which sample rotates
- **Tilt**: Sample misalignment with rotation axis
### Common Neutron Imaging Terms
- **White beam**: Neutrons of all energies (not time-of-flight)
- **TOF**: Time-of-flight, energy-resolved neutrons
- **OB**: Open beam, reference without sample
- **DC**: Dark current, detector background
- **Transmission**: Ratio of intensity through sample to reference
### File Formats
- **TIFF**: Standard image format for projection data
- **JSON**: Configuration files
- **TXT**: Angle lists, logs
---
## Summary
This notebook provides a comprehensive workflow for preparing CCD images for CT reconstruction. Key takeaways:
1. **Flexible**: Can skip or include normalization, many optional processing steps
2. **Interactive**: Visual feedback at each stage
3. **Iterative**: Test parameters on subset before full processing
4. **Complete**: Handles everything from raw data to reconstruction-ready files
5. **Documented**: Exports config files for reproducibility
**Remember:** The most critical parameters are:
- Center of rotation (COR)
- Angle values
- Data quality (normalization, cleaning)
- Proper log conversion
Take time to understand your data and optimize parameters using test reconstructions before processing the full dataset!