libera_utils.scene_id#
Module for mapping radiometer footprints to scene IDs.
This module provides functionality for identifying and classifying atmospheric scenes based on footprint data from satellite observations.
Functions
calculate_cloud_fraction(clear_area) – Calculate cloud fraction from clear sky area percentage.
calculate_cloud_fraction_weighted_optical_depth(...) – Calculate weighted optical depth from upper and lower cloud layers.
calculate_cloud_fraction_weighted_property_for_layer(...) – Calculate cloud fraction weighted property from upper and lower layers.
calculate_cloud_phase(...) – Calculate weighted cloud phase from upper and lower cloud layers.
calculate_surface_wind(surface_wind_u, surface_wind_v) – Calculate total surface wind speed from u and v vector components.
calculate_trmm_surface_type(igbp_surface_type) – Convert IGBP surface type to TRMM surface type classification.
Classes
CalculationSpec – Specification for calculating a derived variable.
FootprintData – Container for footprint data with scene identification capabilities.
FootprintVariables – Standardized variable names for footprint data processing.
IGBPSurfaceType – Enumeration of surface types used in scene classification.
Scene – Represents a single scene with its variable bin definitions.
SceneDefinition – Defines scenes and their classification rules from CSV configuration.
TRMMSurfaceType – Enumeration of TRMM surface types used in ERBE and TRMM scene classification.
- class libera_utils.scene_id.CalculationSpec(output_var: str, function: Callable, input_vars: list[str], output_datatype: type, dependent_calculations: list[str] | None = None)#
Specification for calculating a derived variable.
Defines the parameters needed to calculate a derived variable from input data, including the calculation function, required inputs, and any dependencies on other calculated variables.
- function#
The function to call for calculation
- Type:
Callable
- dependent_calculations#
List of other calculated variables that must be computed first, or None if no dependencies exist. Default is None.
Examples
>>> spec = CalculationSpec(
...     output_var="cloud_fraction",
...     function=calculate_cloud_fraction,
...     input_vars=["clear_area"],
...     output_datatype=float,
... )
- Attributes:
- dependent_calculations
- class libera_utils.scene_id.FootprintData(data: Dataset)#
Container for footprint data with scene identification capabilities.
Manages satellite footprint data through the complete scene identification workflow, including data extraction, preprocessing, derived field calculation, and scene classification.
- Parameters:
data (xr.Dataset) – Input dataset containing required footprint variables
- _data#
Internal dataset of footprint data. During scene identification, scene IDs are added as variables to this dataset.
- Type:
xr.Dataset
- from_ceres_ssf(ssf_path, scene_definitions)#
Process SSF and camera data to identify scenes
- from_cldpx_viirs_geos_cam_groundscene()#
Process alternative data format (not implemented)
- from_clouds_groundscene()#
Process cloud/ground scene data (not implemented)
Notes
This class handles the complete pipeline from raw satellite data to scene identification, including:
1. Data extraction from NetCDF files
2. Missing value handling
3. Derived field calculation (cloud fraction, optical depth, etc.)
4. Scene ID matching based on classification rules
Methods
from_ceres_ssf(ssf_path, scene_definitions) – Process SSF (Single Scanner Footprint) and camera data to identify scenes.
from_cldpx_viirs_geos_cam_groundscene() – Process cloud pixel/VIIRS/GEOS/camera/ground scene data format.
from_clouds_groundscene() – Process clouds/ground scene data format.
identify_scenes([additional_scene_definitions]) – Calculate scene IDs for all data points.
- _calculate_required_fields(result_fields: list[str])#
Calculate necessary derived fields on data from input FootprintVariables.
Computes derived atmospheric variables needed for scene identification, handling dependencies between calculated fields automatically.
- Parameters:
result_fields (list of str) – List of field names to calculate (e.g., ‘cloud_fraction’, ‘optical_depth’)
- Raises:
ValueError – If an unknown field is requested or if circular dependencies exist
Notes
This method modifies self._data in place to conserve memory. It automatically resolves dependencies between calculated fields (e.g., optical depth depends on cloud fraction being calculated first).
The calculation order is determined by dependency analysis and may require multiple passes. A maximum of 30 iterations is allowed to prevent infinite loops from circular dependencies.
Available calculated fields are defined in _CALCULATED_VARIABLE_MAP.
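The multi-pass dependency resolution described above can be sketched as follows. This is a simplified, hypothetical stand-in: the `Spec` dataclass and `resolve_order` helper are illustrative only, not the module's actual API, but they show how repeated passes with an iteration cap resolve dependencies and detect cycles.

```python
from dataclasses import dataclass, field


@dataclass
class Spec:
    # Hypothetical, simplified stand-in for CalculationSpec
    output_var: str
    dependent_calculations: list[str] = field(default_factory=list)


def resolve_order(specs: dict[str, Spec], requested: list[str], max_iterations: int = 30) -> list[str]:
    """Return a calculation order in which dependencies precede dependents."""
    pending = list(requested)
    ordered: list[str] = []
    for _ in range(max_iterations):
        if not pending:
            return ordered
        for name in list(pending):
            spec = specs[name]
            # Pull in dependencies that were not explicitly requested
            for dep in spec.dependent_calculations:
                if dep not in ordered and dep not in pending:
                    pending.append(dep)
            # A field is ready once all of its dependencies are computed
            if all(dep in ordered for dep in spec.dependent_calculations):
                ordered.append(name)
                pending.remove(name)
    raise ValueError(f"Circular dependency suspected among: {pending}")
```

Requesting only `optical_depth` would first schedule `cloud_fraction`, mirroring the example in the Notes; a cycle exhausts the iteration budget and raises `ValueError`.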
- _calculate_single_field_from_spec(spec: CalculationSpec, calculated: list[str])#
Calculate a single field from input FootprintVariables.
Applies the calculation function specified in the CalculationSpec to the input variables, creating a new variable in the dataset.
- Parameters:
spec (CalculationSpec) – Specification defining the calculation to perform
calculated (list of str) – List of variable names already available in the dataset
- Raises:
ValueError – If required input variables are not available in the dataset
- _convert_missing_values(input_missing_value: float)#
Convert input missing values in footprint data to output missing values.
This method standardizes missing value representations by converting from the input dataset’s missing value convention to the output convention used in FootprintData processing (np.NaN).
- Parameters:
input_missing_value (float) – Missing value indicator used in input data (e.g., -999.0, 9.96921e+36)
Notes
Handles two cases:
- If input_missing_value is NaN: uses np.isnan() for comparison
- If input_missing_value is numeric: uses direct equality comparison
Modifies self._data in place, replacing all occurrences of input_missing_value with np.NaN.
Examples
>>> footprint._data = xr.Dataset({'temp': ('x', [20.0, -999.0, 25.0])})
>>> footprint._convert_missing_values(-999.0)
>>> footprint._data['temp'].values
array([20., nan, 25.])
- static _extract_data_from_CeresSSFNOAA20FM6Ed1C(dataset: Dataset) Dataset#
Extract data from CERES SSF NOAA-20 FM6 Edition 1C NetCDF file.
Reads specific variables from the hierarchical group structure of CERES SSF (Single Scanner Footprint) files and organizes them into a flat xarray Dataset with standardized variable names.
- Parameters:
dataset (xr.Dataset) – Open NetCDF dataset in CeresSSFNOAA20FM6Ed1C format
Notes
Data is extracted from NetCDF groups:
- Surface_Map: Surface type information
- Cloudy_Footprint_Area: Cloud properties (fraction, phase, optical depth)
- Full_Footprint_Area: Wind vectors
- Clear_Footprint_Area: Clear sky coverage
Array indexing:
- surface_igbp_type[:, 0]: First surface type estimate
- layers_coverages[:, 1] and [:, 2]: Lower and upper cloud layers
- cloud_*[:, 0] and [:, 1]: Lower and upper cloud layers
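The group-to-flat extraction can be illustrated with plain arrays standing in for the NetCDF groups. The group names follow the Notes above, but the inner variable names (`cloud_optical_depth`, `wind_u`, `clear_area_percent`, etc.) are assumptions for illustration, not the product's actual names:

```python
import numpy as np


def extract_flat_variables(groups: dict[str, dict[str, np.ndarray]]) -> dict[str, np.ndarray]:
    """Flatten hierarchical group data into standardized variable names.

    `groups` stands in for the NetCDF group structure; variable names
    inside each group are hypothetical.
    """
    flat = {}
    # Keep only the first surface type estimate
    flat["surface_igbp_type"] = groups["Surface_Map"]["surface_igbp_type"][:, 0]
    # Columns 0 and 1 hold the lower and upper cloud layers
    flat["optical_depth_lower"] = groups["Cloudy_Footprint_Area"]["cloud_optical_depth"][:, 0]
    flat["optical_depth_upper"] = groups["Cloudy_Footprint_Area"]["cloud_optical_depth"][:, 1]
    flat["surface_wind_u"] = groups["Full_Footprint_Area"]["wind_u"]
    flat["surface_wind_v"] = groups["Full_Footprint_Area"]["wind_v"]
    flat["clear_area"] = groups["Clear_Footprint_Area"]["clear_area_percent"]
    return flat
```

In the real method the groups are read with xarray from the open NetCDF file; the flattening and column selection are the point here.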
- _fill_column_above_max_value(column_name: str, threshold: float, fill_value=nan)#
Replace values above threshold with fill value for specified column.
- Parameters:
column_name (str) – Name of the column to modify
threshold (float) – Values above this threshold are replaced
fill_value (float, optional) – Replacement value. Default is np.nan.
- Raises:
ValueError – If the specified column is not found in the dataset
Examples
>>> footprint._data = xr.Dataset({'cloud_fraction': ('x', [50.0, 120.0, 80.0])})
>>> footprint._fill_column_above_max_value('cloud_fraction', 100.0)
>>> footprint._data['cloud_fraction'].values
array([50., nan, 80.])
- classmethod from_ceres_ssf(ssf_path: Path, scene_definitions: list[SceneDefinition])#
Process SSF (Single Scanner Footprint) and camera data to identify scenes.
Reads CERES SSF data, extracts relevant variables, calculates derived fields, and identifies scene classifications for each footprint.
- Parameters:
ssf_path (pathlib.Path) – Path to the SSF NetCDF file (CeresSSFNOAA20FM6Ed1C format)
scene_definitions (list of SceneDefinition) – List of scene definition objects to apply for classification
- Returns:
Processed dataset containing original variables and calculated fields ready for scene identification.
- Return type:
- Raises:
FileNotFoundError – If the SSF file cannot be found or opened
Notes
Processing steps:
1. Extract variables from SSF NetCDF groups
2. Apply maximum value thresholds to cloud properties
3. Calculate derived fields (cloud fraction, optical depth, wind speed, etc.)
4. Match footprints to scene IDs
Maximum value thresholds applied:
- Cloud fraction: 100%
- Cloud phase: 2 (ice)
- Optical depth: 500
Examples
>>> scene_defs = [SceneDefinition(Path("trmm.csv"))]
>>> footprint = FootprintData.from_ceres_ssf(
...     Path("CERES_SSF_NOAA20_2024001.nc"),
...     scene_defs,
... )
- classmethod from_cldpx_viirs_geos_cam_groundscene()#
Process cloud pixel/VIIRS/GEOS/camera/ground scene data format.
- Raises:
NotImplementedError – This data format is not yet supported
Notes
TODO: LIBSDC-672 Implement processing for alternative data formats including:
- Cloud pixel data
- VIIRS observations
- GEOS model data
- Camera data
- Ground scene classifications
- classmethod from_clouds_groundscene()#
Process clouds/ground scene data format.
- Raises:
NotImplementedError – This data format is not yet supported
Notes
TODO: LIBSDC-673 Implement processing for cloud and ground scene data formats.
- identify_scenes(additional_scene_definitions: list[Path] | None = None)#
Calculate scene IDs for all data points.
This method performs the actual scene identification algorithm on the processed footprint data. Currently a placeholder implementation that should be updated with the actual scene classification logic.
- Parameters:
additional_scene_definitions (list of pathlib.Path or None, optional) – Additional scene definition CSV files to apply beyond the default TRMM and ERBE definitions. Default is None.
Notes
Default scene definitions used:
- TRMM: Tropical Rainfall Measuring Mission scenes
- ERBE: Earth Radiation Budget Experiment scenes
TODO: LIBSDC-674 Add unfiltering scene ID algorithm
TODO: LIBSDC-589 Implement the scene ID matching algorithm. The implementation should:
1. Assign scene IDs to footprints based on variable ranges in scene definitions (default and custom)
2. Add scene ID variables as columns to self._data
3. Handle unmatched footprints appropriately
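A minimal sketch of the matching algorithm the TODO describes, using NumPy boolean masks with first-match priority and -1 marking unmatched footprints. The function name, the plain-dict data layout, and the `(scene_id, variable_ranges)` tuples are assumptions for illustration; the real implementation will operate on self._data and SceneDefinition objects:

```python
import numpy as np


def assign_scene_ids(data: dict[str, np.ndarray],
                     scenes: list[tuple[int, dict[str, tuple[float, float]]]]) -> np.ndarray:
    """First-match scene assignment; -1 marks unmatched footprints."""
    n = len(next(iter(data.values())))
    ids = np.full(n, -1, dtype=int)
    for scene_id, ranges in scenes:
        mask = ids == -1  # only points not yet assigned (first match wins)
        for var, (lo, hi) in ranges.items():
            values = data[var]
            mask &= ~np.isnan(values)  # NaN never matches
            if lo is not None:
                mask &= values >= lo   # inclusive lower bound
            if hi is not None:
                mask &= values <= hi   # inclusive upper bound
        ids[mask] = scene_id
    return ids
```

Because only still-unassigned points are considered for each scene, scenes earlier in the list take priority when ranges overlap.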
- class libera_utils.scene_id.FootprintVariables(value)#
Standardized variable names for footprint data processing.
This class defines consistent naming conventions for all variables used in the scene identification workflow, including both input variables from satellite data products and calculated derived fields.
- class libera_utils.scene_id.IGBPSurfaceType(value)#
Enumeration of surface types used in scene classification.
These surface types are derived from IGBP (International Geosphere-Biosphere Programme) land cover classifications.
- IGBP_1 through IGBP_20#
IGBP surface type categories (values: 1-20)
- Type:
int
- property trmm_surface_type: TRMMSurfaceType#
Map IGBP surface type to corresponding TRMM surface type.
- Returns:
The corresponding TRMM surface type category
- Return type:
TRMMSurfaceType
Examples
>>> IGBPSurfaceType.IGBP_1.trmm_surface_type
<TRMMSurfaceType.HI_SHRUB: 1>
>>> IGBPSurfaceType.IGBP_17.trmm_surface_type
<TRMMSurfaceType.OCEAN: 0>
- class libera_utils.scene_id.Scene(scene_id: int, variable_ranges: dict[str, tuple[float, float]])#
Represents a single scene with its variable bin definitions.
A scene defines a specific atmospheric state characterized by ranges of multiple variables (e.g., cloud fraction, optical depth, surface type). Data points are classified into scenes when all their variable values fall within the scene’s defined ranges.
- variable_ranges#
Dictionary mapping variable names to (min, max) tuples defining the acceptable range for each variable. None values indicate unbounded ranges (no min or no max constraint).
- matches(data_point)#
Check if a data point belongs to this scene
Examples
>>> scene = Scene(
...     scene_id=1,
...     variable_ranges={
...         "cloud_fraction": (0.0, 50.0),
...         "optical_depth": (0.0, 10.0),
...     },
... )
>>> scene.matches({"cloud_fraction": 30.0, "optical_depth": 5.0})
True
>>> scene.matches({"cloud_fraction": 60.0, "optical_depth": 5.0})
False
Methods
matches(data_point)Check if a data point falls within all variable ranges for this scene.
- matches(data_point: dict[str, float]) bool#
Check if a data point falls within all variable ranges for this scene.
- Parameters:
data_point (dict of str to float) – Dictionary of variable names to values
- Returns:
True if data point matches all variable ranges, False otherwise
- Return type:
bool
Notes
A data point matches when:
- All required variables are present in the data point
- All variable values are within their specified ranges (inclusive)
- No variable values are NaN
Range boundaries:
- None for min_val means no lower bound
- None for max_val means no upper bound
- Both bounds are inclusive when specified
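The matching rules above amount to a short predicate. Here is a standalone sketch (a free function taking the ranges explicitly, rather than the actual Scene method):

```python
from __future__ import annotations

import math


def matches(variable_ranges: dict[str, tuple[float | None, float | None]],
            data_point: dict[str, float]) -> bool:
    """Standalone sketch of the Scene matching rules."""
    for var, (lo, hi) in variable_ranges.items():
        if var not in data_point:
            return False          # required variable missing
        value = data_point[var]
        if math.isnan(value):
            return False          # NaN never matches
        if lo is not None and value < lo:
            return False          # below inclusive lower bound
        if hi is not None and value > hi:
            return False          # above inclusive upper bound
    return True
```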
Examples
>>> scene = Scene(
...     scene_id=1,
...     variable_ranges={"temp": (0.0, 100.0), "pressure": (None, 1000.0)},
... )
>>> scene.matches({"temp": 50.0, "pressure": 900.0})
True
>>> scene.matches({"temp": 150.0, "pressure": 900.0})
False
>>> scene.matches({"temp": np.nan, "pressure": 900.0})
False
- class libera_utils.scene_id.SceneDefinition(definition_path: Path)#
Defines scenes and their classification rules from CSV configuration.
Loads and manages scene definitions from a CSV file, providing functionality to identify which scene a given set of atmospheric measurements belongs to.
- identify(data)#
Identify scene IDs for all data points in a dataset
- validate_input_data_columns(data)#
Validate that dataset contains all required variables
Notes
Expected CSV format:
scene_id,variable1_min,variable1_max,variable2_min,variable2_max,...
1,0.0,10.0,20.0,30.0,...
2,10.0,20.0,30.0,40.0,...
Each variable must have both a _min and _max column. NaN or empty values indicate unbounded ranges.
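The CSV layout above can be parsed into per-scene ranges as sketched below. The real class uses pandas; this stdlib-only version (function name and return shape are illustrative) shows the min/max pairing and the empty-or-NaN-means-unbounded convention:

```python
import csv
import io
import math


def load_scene_ranges(csv_text: str) -> dict[int, dict[str, tuple]]:
    """Parse scene-definition CSV text into {scene_id: {var: (min, max)}}."""
    reader = csv.DictReader(io.StringIO(csv_text))
    # Variable names come from stripping the _min/_max suffixes
    variables = sorted({c[:-4] for c in reader.fieldnames if c.endswith(("_min", "_max"))})

    def parse(cell):
        # Empty or NaN cells mean an unbounded side of the range
        if cell in ("", None) or math.isnan(float(cell)):
            return None
        return float(cell)

    scenes = {}
    for row in reader:
        ranges = {var: (parse(row[f"{var}_min"]), parse(row[f"{var}_max"])) for var in variables}
        scenes[int(row["scene_id"])] = ranges
    return scenes
```

For example, a row `2,10.0,` with header `scene_id,temp_min,temp_max` yields `{2: {'temp': (10.0, None)}}`, an unbounded upper range.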
Examples
>>> scene_def = SceneDefinition(Path("trmm.csv"))
>>> scene_def.type
'TRMM'
>>> len(scene_def.scenes)
42
Methods
identify(data) – Identify scene IDs for all data points in the dataset.
validate_input_data_columns(data) – Ensure input data contains all required FootprintVariables.
validate_scene_definition_file() – Ensure scene definition file contains valid column names, bin ranges, that classification parameters are not duplicated across IDs, and that there are no gaps in classification bins.
- _extract_variable_names(columns: Index) list[str]#
Extract unique variable names from min/max column pairs.
- Parameters:
columns (pd.Index) – Column names from the CSV
- Returns:
Sorted list of unique variable names
- Return type:
list of str
Notes
Variable names are extracted by removing the ‘_min’ or ‘_max’ suffix from column names. Only columns with these suffixes are considered as variable definitions.
Examples
>>> cols = pd.Index(['scene_id', 'temp_min', 'temp_max', 'pressure_min', 'pressure_max'])
>>> scene_def._extract_variable_names(cols)
['pressure', 'temp']
- _identify_vectorized(data: Dataset, dims: list[str], shape: tuple[int, ...]) ndarray#
Vectorized scene identification for better performance.
Uses NumPy array operations to efficiently classify all data points simultaneously rather than iterating point-by-point.
- Parameters:
data (xr.Dataset) – Dataset containing all required variables for scene identification
dims (list of str) – Dimension names for the output array
shape (tuple of int) – Shape of the output array
- Returns:
Array of scene IDs with shape matching input dimensions
- Return type:
np.ndarray
Notes
For each scene, creates a boolean mask identifying all matching points, then assigns the scene ID to those points. Earlier scenes in the list have priority for overlapping classifications.
- _parse_row_to_ranges(row: Series, variable_names: list[str]) dict[str, tuple[float | None, float | None]]#
Parse a CSV row into variable ranges.
- Parameters:
row (pd.Series) – CSV row containing one scene definition
variable_names (list of str) – Variable names to extract ranges for
- Returns:
Dictionary mapping variable names to (min, max) tuples. None values indicate unbounded ranges (no constraint).
- Return type:
dict of str to tuple of (float or None, float or None)
Notes
For each variable, looks for columns named {variable}_min and {variable}_max. NaN values in the CSV are converted to None to indicate unbounded ranges.
Examples
>>> row = pd.Series({'scene_id': 1, 'temp_min': 0.0, 'temp_max': 100.0,
...                  'pressure_min': np.nan, 'pressure_max': 1000.0})
>>> scene_def._parse_row_to_ranges(row, ['temp', 'pressure'])
{'temp': (0.0, 100.0), 'pressure': (None, 1000.0)}
- identify(data: Dataset) DataArray#
Identify scene IDs for all data points in the dataset.
Classifies each data point in the dataset by finding the first scene whose variable ranges match all the data point’s variable values.
- Parameters:
data (xr.Dataset) – Dataset containing all required variables for scene identification
- Returns:
Array of scene IDs with the same dimensions as the input data. Scene ID of -1 indicates no matching scene was found for that point.
- Return type:
xr.DataArray
- Raises:
ValueError – If the dataset is missing required variables
Notes
- Scene matching uses first-match priority: if multiple scenes could match a data point, the first one in the definition list is assigned
- Data points with NaN values in any required variable are not matched
- The method logs statistics about matched/unmatched points and the distribution of scene IDs
Examples
>>> data = xr.Dataset({
...     'cloud_fraction': ('x', [20.0, 60.0, 85.0]),
...     'optical_depth': ('x', [5.0, 15.0, 25.0]),
... })
>>> scene_def = SceneDefinition(Path("scenes.csv"))
>>> scene_ids = scene_def.identify(data)
>>> scene_ids.values
array([ 1,  2, -1])  # Last point didn't match any scene
- validate_input_data_columns(data: Dataset)#
Ensure input data contains all required FootprintVariables.
- Parameters:
data (xr.Dataset) – Dataset to validate
- Raises:
ValueError – If required variables are missing from the dataset, with a message listing all missing variables
Examples
>>> scene_def = SceneDefinition(Path("scenes.csv"))
>>> scene_def.required_columns = ['cloud_fraction', 'optical_depth']
>>> data = xr.Dataset({'cloud_fraction': ('x', [10, 20])})
>>> scene_def.validate_input_data_columns(data)
Traceback (most recent call last):
    ...
ValueError: Required columns ['optical_depth'] not in input data for TRMM scene identification.
- validate_scene_definition_file()#
Ensure scene definition file contains valid column names, bin ranges, that classification parameters are not duplicated across IDs, and that there are no gaps in classification bins.
- Raises:
NotImplementedError – This validation is not yet implemented
Notes
TODO: LIBSDC-589 Implement validation checks for:
- Valid column naming conventions
- Non-overlapping scene definitions
- Complete coverage of parameter space (no gaps)
- Consistent min/max value ordering
- class libera_utils.scene_id.TRMMSurfaceType(value)#
Enumeration of TRMM surface types used in ERBE and TRMM scene classification.
- libera_utils.scene_id.calculate_cloud_fraction(clear_area: float | ndarray[Any, dtype[floating]]) float | ndarray[Any, dtype[floating]]#
Calculate cloud fraction from clear sky area percentage.
- Parameters:
clear_area (float or ndarray) – Clear area percentage (0-100)
- Returns:
Cloud fraction percentage (0-100), calculated as 100 - clear_area
- Return type:
float or ndarray
- Raises:
ValueError – If clear_area contains values less than 0 or greater than 100
Examples
>>> calculate_cloud_fraction(30.0)
70.0
>>> calculate_cloud_fraction(np.array([10, 25, 90]))
array([90, 75, 10])
- libera_utils.scene_id.calculate_cloud_fraction_weighted_optical_depth(optical_depth_lower: float | ndarray[Any, dtype[floating]], optical_depth_upper: float | ndarray[Any, dtype[floating]], cloud_fraction_lower: float | ndarray[Any, dtype[floating]], cloud_fraction_upper: float | ndarray[Any, dtype[floating]], cloud_fraction: float | ndarray[Any, dtype[floating]]) float | ndarray[Any, dtype[floating]]#
Calculate weighted optical depth from upper and lower cloud layers.
Combines optical depth measurements from two atmospheric layers using cloud fraction weighting to produce a single representative optical depth value.
- Parameters:
optical_depth_lower (float or ndarray) – Optical depth for lower cloud layer (dimensionless)
optical_depth_upper (float or ndarray) – Optical depth for upper cloud layer (dimensionless)
cloud_fraction_lower (float or ndarray) – Cloud fraction for lower layer (0-100)
cloud_fraction_upper (float or ndarray) – Cloud fraction for upper layer (0-100)
cloud_fraction (float or ndarray) – Total cloud fraction (0-100)
- Returns:
Optical depth weighted by cloud fraction and summed across layers, or np.nan if no valid data or zero total cloud fraction
- Return type:
float or ndarray
See also
calculate_cloud_fraction_weighted_property_for_layerGeneral weighting function
Examples
>>> calculate_cloud_fraction_weighted_optical_depth(5.0, 15.0, 40.0, 60.0, 100.0)
11.0  # (5*40 + 15*60)/100
- libera_utils.scene_id.calculate_cloud_fraction_weighted_property_for_layer(property_lower: float | ndarray[Any, dtype[floating]], property_upper: float | ndarray[Any, dtype[floating]], cloud_fraction_lower: float | ndarray[Any, dtype[floating]], cloud_fraction_upper: float | ndarray[Any, dtype[floating]], cloud_fraction: float | ndarray[Any, dtype[floating]]) float | ndarray[Any, dtype[floating]]#
Calculate cloud fraction weighted property from upper and lower layers.
Computes a weighted average of a cloud property across two atmospheric layers, where the weights are determined by each layer’s contribution to total cloud fraction.
- Parameters:
property_lower (float or ndarray) – Property values for the lower cloud layer
property_upper (float or ndarray) – Property values for the upper cloud layer
cloud_fraction_lower (float or ndarray) – Cloud fraction for the lower cloud layer (0-100)
cloud_fraction_upper (float or ndarray) – Cloud fraction for the upper cloud layer (0-100)
cloud_fraction (float or ndarray) – Total cloud fraction (0-100)
- Returns:
Property weighted by cloud fraction and summed across layers, or np.nan if no valid data or zero total cloud fraction
- Return type:
float or ndarray
Notes
The weighting formula is:
result = (property_lower * cloud_fraction_lower / cloud_fraction)
       + (property_upper * cloud_fraction_upper / cloud_fraction)
Returns NaN when:
- Total cloud fraction is zero or NaN
- Both layers have invalid (NaN) property values
- All cloud fractions are NaN
Examples
>>> calculate_cloud_fraction_weighted_property_for_layer(
...     property_lower=10.0, property_upper=20.0,
...     cloud_fraction_lower=30.0, cloud_fraction_upper=70.0,
...     cloud_fraction=100.0,
... )
17.0  # (10*30 + 20*70)/100
- libera_utils.scene_id.calculate_cloud_phase(cloud_phase_lower: float | ndarray[Any, dtype[floating]], cloud_phase_upper: float | ndarray[Any, dtype[floating]], cloud_fraction_lower: float | ndarray[Any, dtype[floating]], cloud_fraction_upper: float | ndarray[Any, dtype[floating]], cloud_fraction: float | ndarray[Any, dtype[floating]]) float | ndarray[Any, dtype[floating]]#
Calculate weighted cloud phase from upper and lower cloud layers.
Computes the dominant cloud phase by weighting each layer’s phase by its cloud fraction contribution and rounding to the nearest integer phase classification (1=liquid, 2=ice).
- Parameters:
cloud_phase_lower (float or ndarray) – Cloud phase for lower layer (1=liquid, 2=ice)
cloud_phase_upper (float or ndarray) – Cloud phase for upper layer (1=liquid, 2=ice)
cloud_fraction_lower (float or ndarray) – Cloud fraction for lower layer (0-100)
cloud_fraction_upper (float or ndarray) – Cloud fraction for upper layer (0-100)
cloud_fraction (float or ndarray) – Total cloud fraction (0-100)
- Returns:
Cloud phase weighted by cloud fraction and rounded to nearest integer (1=liquid, 2=ice), or np.nan if no valid data
- Return type:
float or ndarray
Notes
The weighted average is rounded to the nearest integer to provide a discrete phase classification. Values between 1 and 2 are rounded to either 1 or 2, with 1.5 rounding to 2.
Examples
>>> calculate_cloud_phase(1.0, 2.0, 30.0, 70.0, 100.0)
2.0  # (1*30 + 2*70)/100 = 1.7, rounds to 2
>>> calculate_cloud_phase(1.0, 1.0, 50.0, 50.0, 100.0)
1.0  # All liquid
- libera_utils.scene_id.calculate_surface_wind(surface_wind_u: float | ndarray[Any, dtype[floating]], surface_wind_v: float | ndarray[Any, dtype[floating]]) float | ndarray[Any, dtype[floating]]#
Calculate total surface wind speed from u and v vector components.
- Parameters:
surface_wind_u (float or ndarray) – u (east-west) wind component (m/s)
surface_wind_v (float or ndarray) – v (north-south) wind component (m/s)
- Returns:
Total wind speed magnitude (m/s), or np.nan where input components are NaN
- Return type:
float or ndarray
Notes
Wind speed is calculated using the Pythagorean theorem: sqrt(u^2 + v^2). NaN values in either component result in NaN output for that position.
Examples
>>> calculate_surface_wind(3.0, 4.0)
5.0
>>> calculate_surface_wind(np.array([3, np.nan]), np.array([4, 5]))
array([ 5., nan])
- libera_utils.scene_id.calculate_trmm_surface_type(igbp_surface_type: int | ndarray[Any, dtype[integer]]) int | ndarray[Any, dtype[integer]]#
Convert IGBP surface type to TRMM surface type classification.
- Parameters:
igbp_surface_type (int or ndarray of int) – IGBP surface type codes
- Returns:
TRMM surface type codes
- Return type:
int or ndarray of int
- Raises:
ValueError – If any input values cannot be converted to a valid IGBP surface type
Notes
The conversion uses a lookup table derived from the TRMMSurfaceType.value property. Values that don’t correspond to valid TRMM surface types will raise a ValueError.
Examples
>>> calculate_trmm_surface_type(1)
5
>>> calculate_trmm_surface_type(np.array([1, 0]))
array([ 5, 17])
>>> calculate_trmm_surface_type(999)
Traceback (most recent call last):
    ...
ValueError: Cannot convert IGBP surface type value(s) to TRMM surface type: [999]
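The lookup-table mechanics described in the Notes can be sketched as follows. The two mapping entries are taken from the Examples above; the full table lives in TRMMSurfaceType, so the function name and `mapping` parameter here are illustrative only:

```python
import numpy as np


def igbp_to_trmm(igbp_codes, mapping=None):
    """Vectorized IGBP-to-TRMM conversion via a lookup table (sketch)."""
    if mapping is None:
        mapping = {1: 5, 0: 17}  # entries from the Examples; full table in TRMMSurfaceType
    codes = np.atleast_1d(np.asarray(igbp_codes, dtype=int))
    # Reject codes with no entry in the lookup table
    invalid = codes[~np.isin(codes, list(mapping))]
    if invalid.size:
        raise ValueError(
            f"Cannot convert IGBP surface type value(s) to TRMM surface type: {sorted(set(invalid.tolist()))}"
        )
    result = np.vectorize(mapping.get)(codes)
    # Preserve scalar-in, scalar-out behavior
    return result if np.ndim(igbp_codes) else int(result[0])
```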