libera_utils.logutil.JsonLogEncoder#

class libera_utils.logutil.JsonLogEncoder(*, skipkeys=False, ensure_ascii=True, check_circular=True, allow_nan=True, sort_keys=False, indent=None, separators=None, default=None)#

Bases: JSONEncoder

Custom JSON encoder for logging that can handle arbitrary Python objects.

This encoder is designed to be absolutely robust and never raise exceptions, since it runs inside logging infrastructure. It handles:

  • datetime/date objects → ISO format strings

  • Non-string dictionary keys → converted to strings

  • Nested structures with arbitrary objects

  • Any other Python object → repr() fallback
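A minimal usage sketch based on the behavior described above (the exact string produced for datetime keys and the repr() text are assumptions about formatting, not guaranteed output):

    import json
    from datetime import datetime

    from libera_utils.logutil import JsonLogEncoder

    record = {
        datetime(2024, 1, 1): "run started",        # non-string key -> stringified
        "timestamp": datetime(2024, 1, 1, 12, 30),  # datetime value -> ISO string
        "handler": object(),                        # arbitrary object -> repr() fallback
    }

    # Works where plain json.dumps(record) would raise a TypeError.
    print(json.dumps(record, cls=JsonLogEncoder))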

Methods

default(o)

Fallback for objects that make it past preprocessing.

encode(o)

Override encode to preprocess the entire object tree before JSON serialization.

iterencode(o[, _one_shot])

Encode the given object and yield each string representation as available.

__init__(*, skipkeys=False, ensure_ascii=True, check_circular=True, allow_nan=True, sort_keys=False, indent=None, separators=None, default=None)#

Constructor for JSONEncoder, with sensible defaults.

If skipkeys is false, then it is a TypeError to attempt encoding of keys that are not str, int, float or None. If skipkeys is True, such items are simply skipped.

If ensure_ascii is true, the output is guaranteed to be str objects with all incoming non-ASCII characters escaped. If ensure_ascii is false, the output can contain non-ASCII characters.

If check_circular is true, then lists, dicts, and custom encoded objects will be checked for circular references during encoding to prevent an infinite recursion (which would cause a RecursionError). Otherwise, no such check takes place.

If allow_nan is true, then NaN, Infinity, and -Infinity will be encoded as such. This behavior is not JSON specification compliant, but is consistent with most JavaScript based encoders and decoders. Otherwise, it will be a ValueError to encode such floats.

If sort_keys is true, then the output of dictionaries will be sorted by key; this is useful for regression tests to ensure that JSON serializations can be compared on a day-to-day basis.

If indent is a non-negative integer, then JSON array elements and object members will be pretty-printed with that indent level. An indent level of 0 will only insert newlines. None is the most compact representation.

If specified, separators should be an (item_separator, key_separator) tuple. The default is (', ', ': ') if indent is None and (',', ': ') otherwise. To get the most compact JSON representation, you should specify (',', ':') to eliminate whitespace.

If specified, default is a function that gets called for objects that can’t otherwise be serialized. It should return a JSON encodable version of the object or raise a TypeError.
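These options are the standard JSONEncoder keyword arguments, so they can be passed through json.dumps as usual. A short sketch, assuming JsonLogEncoder forwards them unchanged to the base class:

    import json

    from libera_utils.logutil import JsonLogEncoder

    # sort_keys and indent behave as for the base JSONEncoder (assumed, since
    # JsonLogEncoder is not documented to override them).
    print(json.dumps({"b": 2, "a": 1}, cls=JsonLogEncoder, sort_keys=True, indent=2))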

Attributes

item_separator

key_separator

_preprocess(o: Any, _depth: int = 0) → Any#

Recursively preprocess objects to make them JSON-serializable.

  • Converts non-string dict keys to strings

  • Recursively processes nested dicts and lists

  • Converts datetime/date objects to ISO format

  • Uses repr() for any other non-serializable objects

  • Prevents infinite recursion with depth limit

Parameters:
  • o (Any) – Object to preprocess

  • _depth (int) – Internal recursion depth counter (default 0)
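A minimal sketch of the preprocessing described above; the depth limit, key handling, and exact conversions are assumptions for illustration, not the actual implementation:

    from datetime import date, datetime
    from typing import Any

    _MAX_DEPTH = 100  # assumed recursion limit; the real value may differ

    def _preprocess_sketch(o: Any, _depth: int = 0) -> Any:
        """Illustrative re-implementation of the documented behavior."""
        if _depth > _MAX_DEPTH:
            return repr(o)  # stop recursing and fall back to a string
        if isinstance(o, (datetime, date)):
            return o.isoformat()
        if isinstance(o, dict):
            return {
                (k.isoformat() if isinstance(k, (datetime, date)) else str(k)):
                    _preprocess_sketch(v, _depth + 1)
                for k, v in o.items()
            }
        if isinstance(o, (list, tuple)):
            return [_preprocess_sketch(v, _depth + 1) for v in o]
        if isinstance(o, (str, int, float, bool)) or o is None:
            return o
        return repr(o)  # any other non-serializable object -> repr()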

_serialize_key(key: Any) → str | int | float | bool | None#

Convert a dictionary key to a string.

Handles datetime/date keys specially to use ISO format.
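For illustration only, the conversion might look like the following sketch; whether JSON-native key types (int, float, bool, None) are passed through unchanged is suggested by the signature but assumed here:

    from datetime import date, datetime
    from typing import Any

    def _serialize_key_sketch(key: Any) -> "str | int | float | bool | None":
        if isinstance(key, (datetime, date)):
            return key.isoformat()  # ISO format for date-like keys
        if isinstance(key, (str, int, float, bool)) or key is None:
            return key  # already a valid JSON key type (assumed pass-through)
        return str(key)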

default(o: Any) → str#

Fallback for objects that make it past preprocessing.

This should rarely be called since encode() preprocesses everything, but we keep it as an additional safety layer.
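A hedged sketch of that safety layer; guarding repr() itself is an assumption consistent with the "never raise exceptions" design goal, not confirmed behavior:

    import json
    from typing import Any

    class _FallbackSketch(json.JSONEncoder):
        def default(self, o: Any) -> str:
            # Last-resort conversion for anything the preprocessing missed.
            try:
                return repr(o)
            except Exception:
                # Assumed guard: even a failing __repr__ should not break logging.
                return f"<unrepresentable object of type {type(o).__name__}>"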

encode(o: Any) → str#

Override encode to preprocess the entire object tree before JSON serialization.

This is necessary to handle non-string dictionary keys, which json.dumps cannot handle even with a custom default() method.
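The limitation this works around can be reproduced with the standard encoder: default() is never consulted for dictionary keys, so non-standard keys fail even when a fallback is supplied.

    import json
    from datetime import datetime

    from libera_utils.logutil import JsonLogEncoder

    event = {datetime(2024, 1, 1): "event"}

    try:
        json.dumps(event, default=str)  # default= only applies to values, not keys
    except TypeError as exc:
        print(exc)  # "keys must be str, int, float, bool or None, ..."

    # The overridden encode() preprocesses the key first, so this succeeds.
    print(json.dumps(event, cls=JsonLogEncoder))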