process_packets Script

archive/process_packets.py drives the data-preparation stage of the pipeline. It converts raw GIF animations into the binary-safe RGB565 packet format expected by the lighting rig hardware.

Pipeline Overview

GIF file
  └─► gif_to_bmp()          – extract frames, resize to 16 × 16, save as BMP
        └─► bmp_to_hex_values()  – convert each BMP to a list of RGB565 hex strings
              ├─► save_packets_to_files()  – split hex values into fixed-size packets
              │     ├─ writes  *_packet_NNNNN.txt  (one per packet)
              │     ├─ writes  *_processed.txt     (master concatenated file)
              │     └─ writes  *_meta.json         (frame/packet counts)
              └─► group_packets_into_chunks()  – move packets into chunk sub-folders

Packet Format

Each packet file contains a single line structured as:

<5-digit number><8-digit CRC32 hex><3-digit length>@<hex data>![?]
  • The ? terminator appears only on the final packet of a transmission.
  • CRC32 is calculated over the raw hex data string (before the header is prepended).
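
The format can be exercised with a small round-trip sketch. Note that `build_packet` and `parse_packet` below are illustrative helpers written against the spec above, not functions from the script:

```python
import zlib

def build_packet(number: int, data: str, last: bool) -> str:
    # Header: 5-digit packet number, 8-digit CRC32 hex, 3-digit payload length, '@'.
    crc = zlib.crc32(data.encode("utf-8")) & 0xFFFFFFFF
    return f"{number:05d}{crc:08X}{len(data):03d}@{data}!" + ("?" if last else "")

def parse_packet(packet: str) -> tuple[int, str, bool, bool]:
    # Split the fixed-width header from the payload and re-check the CRC.
    header, _, body = packet.partition("@")
    number = int(header[:5])
    expected_crc = int(header[5:13], 16)
    length = int(header[13:16])
    data = body[:length]
    crc_ok = (zlib.crc32(data.encode("utf-8")) & 0xFFFFFFFF) == expected_crc
    is_last = body.endswith("?")
    return number, data, crc_ok, is_last
```

A receiver implementing this check can reject any packet whose recomputed CRC or payload length disagrees with the header.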

Usage

python archive/process_packets.py --input path/to/animation.gif --output path/to/output/

Options

Flag           Default     Description
--input        (required)  Path to a GIF file or a directory of GIFs.
--output       (required)  Output directory for packet and metadata files.
--packet-size  120         Number of RGB565 hex values per packet.
--chunk-size   100         Number of packet files per chunk sub-folder.

Constants

Name                 Value     Description
FRAME_SIZE           (16, 16)  Target pixel dimensions for each frame.
DEFAULT_PACKET_SIZE  120       Default number of hex values per packet.
DEFAULT_CHUNK_SIZE   100       Default packet files per chunk folder.
PREVIEW_SCALE        16        Scale factor for the sharp root-level preview GIF.

API Reference

archive.process_packets

GIF processing pipeline: converts GIF animations into RGB565 packet files.

This module drives the data-preparation stage of the pipeline:

  1. Each GIF frame is resized to 16 × 16 pixels and saved as a BMP.
  2. BMP pixel data is converted to RGB565 hex values.
  3. Hex values are concatenated and split into fixed-size packets, each with a CRC32 checksum header.
  4. Packets are written to individual .txt files and then grouped into chunk sub-folders.
  5. A *_meta.json file and a preview GIF are produced alongside the packets.

The module can be run directly or called programmatically via run_processing.

bmp_to_hex_values(input_bmp)

Convert a BMP frame to a list of RGB565 hex strings.

Each pixel is encoded as a four-character uppercase hex string representing a 16-bit RGB565 value (5 bits red, 6 bits green, 5 bits blue).
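
A standalone version of the conversion (mirroring the arithmetic in the source below), applied to a few sample pixels:

```python
def rgb_to_rgb565_hex(r: int, g: int, b: int) -> str:
    # Scale each 8-bit channel down to 5/6/5 bits, then pack as RRRRRGGGGGGBBBBB.
    r5 = int((r * 31) / 255)
    g6 = int((g * 63) / 255)
    b5 = int((b * 31) / 255)
    return f"{(r5 << 11) | (g6 << 5) | b5:04X}"

# Pure red → "F800", pure green → "07E0", pure blue → "001F".
```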

Parameters:

  input_bmp (Path, required) – Path to the BMP file to convert.

Returns:

  list[str] – A list of four-character hex strings, one per pixel, in row-major order.

Source code in archive/process_packets.py
def bmp_to_hex_values(input_bmp: Path) -> list[str]:
    """Convert a BMP frame to a list of RGB565 hex strings.

    Each pixel is encoded as a four-character uppercase hex string representing
    a 16-bit RGB565 value (5 bits red, 6 bits green, 5 bits blue).

    Args:
        input_bmp: Path to the BMP file to convert.

    Returns:
        A list of four-character hex strings, one per pixel, in row-major
        order.
    """
    bmp = Image.open(input_bmp).convert("RGB").resize(FRAME_SIZE, Image.Resampling.NEAREST)
    hex_values: list[str] = []
    for r, g, b in bmp.getdata():
        r5 = int((r * 31) / 255)
        g6 = int((g * 63) / 255)
        b5 = int((b * 31) / 255)
        rgb565 = (r5 << 11) | (g6 << 5) | b5
        hex_values.append(f"{rgb565:04X}")
    return hex_values

calculate_crc32(data)

Calculate a masked CRC32 checksum for a UTF-8 string.
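
On Python 3, zlib.crc32 already returns an unsigned value, so the & 0xFFFFFFFF mask in the source below is defensive: it pins the result into the 0..2**32-1 range regardless of interpreter. A quick check (the sample string is illustrative):

```python
import zlib

# Mask the checksum to an unsigned 32-bit range, as the script does.
checksum = zlib.crc32("F800".encode("utf-8")) & 0xFFFFFFFF
assert 0 <= checksum <= 0xFFFFFFFF

# Formatted as the 8-digit uppercase hex field used in packet headers.
header_field = f"{checksum:08X}"
assert len(header_field) == 8
```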

Parameters:

  data (str, required) – The string to checksum.

Returns:

  int – An unsigned 32-bit integer checksum.

Source code in archive/process_packets.py
def calculate_crc32(data: str) -> int:
    """Calculate a masked CRC32 checksum for a UTF-8 string.

    Args:
        data: The string to checksum.

    Returns:
        An unsigned 32-bit integer checksum.
    """
    return zlib.crc32(data.encode("utf-8")) & 0xFFFFFFFF

create_packet(packet_number, packet_data, total_packets)

Assemble a complete packet string including header, data, and terminator.

The final packet in a transmission is suffixed with "?" to signal the end of the stream to the receiver.

Parameters:

  packet_number (int, required) – Zero-based index of the packet.
  packet_data (str, required) – The hex-value payload string for this packet.
  total_packets (int, required) – Total number of packets in the transmission.

Returns:

  str – The fully assembled packet string.

Source code in archive/process_packets.py
def create_packet(packet_number: int, packet_data: str, total_packets: int) -> str:
    """Assemble a complete packet string including header, data, and terminator.

    The final packet in a transmission is suffixed with ``"?"`` to signal the
    end of the stream to the receiver.

    Args:
        packet_number: Zero-based index of the packet.
        packet_data: The hex-value payload string for this packet.
        total_packets: Total number of packets in the transmission.

    Returns:
        The fully assembled packet string.
    """
    transmission_end = "?" if packet_number == total_packets - 1 else ""
    return f"{generate_packet_header(packet_number, packet_data)}{packet_data}!{transmission_end}"

extract_packet_number(filename)

Extract the numeric index from a packet filename.

Parameters:

  filename (str, required) – The filename string, e.g. "anim_packet_00003.txt".

Returns:

  int – The extracted integer, or -1 if no number is found.

Source code in archive/process_packets.py
def extract_packet_number(filename: str) -> int:
    """Extract the numeric index from a packet filename.

    Args:
        filename: The filename string, e.g. ``"anim_packet_00003.txt"``.

    Returns:
        The extracted integer, or ``-1`` if no number is found.
    """
    match = re.search(r"(\d+)", filename)
    if match:
        return int(match.group(1))
    return -1

generate_packet_header(packet_number, packet_data)

Build the fixed-width header for a single packet.

The header format is:

<5-digit packet number><8-digit CRC32 hex><3-digit length>@

Parameters:

  packet_number (int, required) – Zero-based index of the packet.
  packet_data (str, required) – The raw hex-value string that will follow the header.

Returns:

  str – The formatted header string.

Source code in archive/process_packets.py
def generate_packet_header(packet_number: int, packet_data: str) -> str:
    """Build the fixed-width header for a single packet.

    The header format is::

        <5-digit packet number><8-digit CRC32 hex><3-digit length>@

    Args:
        packet_number: Zero-based index of the packet.
        packet_data: The raw hex-value string that will follow the header.

    Returns:
        The formatted header string.
    """
    checksum = calculate_crc32(packet_data)
    packet_length = len(packet_data)
    return f"{packet_number:05d}{checksum:08X}{packet_length:03d}@"

gif_to_bmp(input_gif, output_folder)

Extract and resize each frame of a GIF, saving them as BMP files.

Also saves two preview GIFs to disk:

  • A 16 × 16 preview alongside the packet files.
  • A sharp upscaled preview (256 × 256) at the project root for visual inspection.

Parameters:

  input_gif (Path, required) – Path to the source GIF file.
  output_folder (Path, required) – Directory in which to write the BMP frame files and the 16 × 16 preview GIF.

Returns:

  list[Path] – An ordered list of paths to the generated BMP files.

Source code in archive/process_packets.py
def gif_to_bmp(input_gif: Path, output_folder: Path) -> list[Path]:
    """Extract and resize each frame of a GIF, saving them as BMP files.

    Also saves two preview GIFs to disk:

    - A 16 × 16 preview alongside the packet files.
    - A sharp upscaled preview (256 × 256) at the project root for visual
      inspection.

    Args:
        input_gif: Path to the source GIF file.
        output_folder: Directory in which to write the BMP frame files and the
            16 × 16 preview GIF.

    Returns:
        An ordered list of paths to the generated BMP files.
    """
    gif = Image.open(input_gif)
    gif_base_name = input_gif.stem
    resized_frames: list[Image.Image] = []
    durations: list[int] = []
    bmp_paths: list[Path] = []

    for index, frame in enumerate(ImageSequence.Iterator(gif)):
        delay = frame.info.get("duration", 0)
        print(f"Frame {index}: Delay {delay} ms")
        resized_frame = frame.convert("RGB").resize(FRAME_SIZE, Image.Resampling.NEAREST)
        bmp_path = output_folder / f"{gif_base_name}_frame_{index:0{FRAME_FILE_DIGITS}d}.bmp"
        resized_frame.save(bmp_path, "BMP")
        bmp_paths.append(bmp_path)
        resized_frames.append(resized_frame)
        durations.append(delay)

    if resized_frames:
        small_gif_path = output_folder / f"{gif_base_name}_16x16.gif"
        first_frame, *other_frames = resized_frames
        first_frame.save(
            small_gif_path,
            save_all=True,
            append_images=other_frames,
            loop=0,
            duration=durations or None,
            disposal=2,
        )
        print(f"Saved 16x16 GIF preview to: {small_gif_path}")

        # Create an upscaled preview with hard pixel edges for visual inspection.
        preview_size = (FRAME_SIZE[0] * PREVIEW_SCALE, FRAME_SIZE[1] * PREVIEW_SCALE)
        sharp_frames = [frame.resize(preview_size, Image.Resampling.NEAREST) for frame in resized_frames]
        root_preview_path = PROJECT_ROOT / f"{gif_base_name}_preview_sharp.gif"
        sharp_first_frame, *sharp_other_frames = sharp_frames
        sharp_first_frame.save(
            root_preview_path,
            save_all=True,
            append_images=sharp_other_frames,
            loop=0,
            duration=durations or None,
            disposal=2,
        )
        print(f"Saved root sharp preview to: {root_preview_path}")

    return bmp_paths

group_packets_into_chunks(source_folder, chunk_size=DEFAULT_CHUNK_SIZE)

Organise packet files into numbered chunk<N> sub-folders.

Packet files are sorted by their embedded packet number and then moved into sub-folders of chunk_size files each. This is useful for reducing the number of files in a single directory for large GIFs.

Parameters:

  source_folder (Path, required) – Directory containing the *_packet_*.txt files.
  chunk_size (int, default: DEFAULT_CHUNK_SIZE) – Maximum number of packet files per chunk folder.

Source code in archive/process_packets.py
def group_packets_into_chunks(source_folder: Path, chunk_size: int = DEFAULT_CHUNK_SIZE) -> None:
    """Organise packet files into numbered ``chunk<N>`` sub-folders.

    Packet files are sorted by their embedded packet number and then moved into
    sub-folders of ``chunk_size`` files each.  This is useful for reducing the
    number of files in a single directory for large GIFs.

    Args:
        source_folder: Directory containing the ``*_packet_*.txt`` files.
        chunk_size: Maximum number of packet files per chunk folder.
            Defaults to :data:`DEFAULT_CHUNK_SIZE`.
    """
    files = [
        path
        for path in source_folder.iterdir()
        if path.is_file() and "_packet_" in path.name
    ]
    files.sort(key=lambda path: extract_packet_number(path.name))

    total_files = len(files)
    if total_files == 0:
        return

    total_chunks = (total_files // chunk_size) + (1 if total_files % chunk_size > 0 else 0)
    for chunk_num in range(total_chunks):
        chunk_folder = source_folder / f"chunk{chunk_num + 1}"
        chunk_folder.mkdir(exist_ok=True)
        start_index = chunk_num * chunk_size
        end_index = min((chunk_num + 1) * chunk_size, total_files)
        for file_path in files[start_index:end_index]:
            destination_path = chunk_folder / file_path.name
            shutil.move(str(file_path), str(destination_path))
        print(f"Moved {end_index - start_index} files to {chunk_folder}")

iter_gif_files(input_path)

Collect GIF files from a single file path or a directory.

Preview GIFs (files ending in _16x16.gif) are excluded automatically.

Parameters:

  input_path (Path, required) – Path to either a single GIF file or a directory of GIFs.

Returns:

  list[Path] – A sorted list of GIF file paths. Returns an empty list if the input path is neither a valid GIF nor a directory.

Source code in archive/process_packets.py
def iter_gif_files(input_path: Path) -> list[Path]:
    """Collect GIF files from a single file path or a directory.

    Preview GIFs (files ending in ``_16x16.gif``) are excluded automatically.

    Args:
        input_path: Path to either a single GIF file or a directory of GIFs.

    Returns:
        A sorted list of GIF file paths.  Returns an empty list if the input
        path is neither a valid GIF nor a directory.
    """
    if input_path.is_file():
        return [input_path] if input_path.suffix.lower() == ".gif" and not input_path.name.endswith("_16x16.gif") else []

    if input_path.is_dir():
        return sorted(
            [
                path
                for path in input_path.iterdir()
                if path.is_file() and path.suffix.lower() == ".gif" and not path.name.endswith("_16x16.gif")
            ]
        )

    return []

process_gif(gif_path, output_folder_path, packet_size)

Run the full processing pipeline for a single GIF file.

Orchestrates frame extraction, hex conversion, packet generation, BMP clean-up, and metadata writing for one GIF.

Parameters:

  gif_path (Path, required) – Path to the source GIF file.
  output_folder_path (Path, required) – Directory in which to write all outputs.
  packet_size (int, required) – Number of hex values per packet.

Returns:

  Path – Path to the generated *_meta.json metadata file.

Source code in archive/process_packets.py
def process_gif(gif_path: Path, output_folder_path: Path, packet_size: int) -> Path:
    """Run the full processing pipeline for a single GIF file.

    Orchestrates frame extraction, hex conversion, packet generation, BMP
    clean-up, and metadata writing for one GIF.

    Args:
        gif_path: Path to the source GIF file.
        output_folder_path: Directory in which to write all outputs.
        packet_size: Number of hex values per packet.

    Returns:
        Path to the generated ``*_meta.json`` metadata file.
    """
    gif_base_name = gif_path.stem
    bmp_files = gif_to_bmp(gif_path, output_folder_path)
    frame_index_pattern = re.compile(r"_frame_(\d{3})\.bmp")
    sorted_bmps = sorted(
        bmp_files,
        key=lambda path: int(frame_index_pattern.search(path.name).group(1)) if frame_index_pattern.search(path.name) else 0,
    )

    hex_values_list: list[list[str]] = [bmp_to_hex_values(bmp_file) for bmp_file in sorted_bmps]
    master_file_path = output_folder_path / f"{gif_base_name}_processed.txt"
    total_packets = save_packets_to_files(hex_values_list, output_folder_path, gif_base_name, packet_size, master_file_path)
    total_packets -= 1
    metadata_path = write_metadata(output_folder_path, gif_base_name, len(hex_values_list), total_packets)
    remove_remaining_files(output_folder_path, gif_base_name)
    return metadata_path

remove_remaining_files(output_folder_path, gif_base_name)

Delete the intermediate BMP frame files after packet generation.

Parameters:

  output_folder_path (Path, required) – Directory containing the BMP files to remove.
  gif_base_name (str, required) – Base name of the GIF whose BMP files should be deleted.

Source code in archive/process_packets.py
def remove_remaining_files(output_folder_path: Path, gif_base_name: str) -> None:
    """Delete the intermediate BMP frame files after packet generation.

    Args:
        output_folder_path: Directory containing the BMP files to remove.
        gif_base_name: Base name of the GIF whose BMP files should be deleted.
    """
    bmp_pattern = re.compile(fr"{re.escape(gif_base_name)}_frame_\d{{{FRAME_FILE_DIGITS}}}\.bmp")
    for file_path in output_folder_path.iterdir():
        if file_path.is_file() and bmp_pattern.match(file_path.name):
            file_path.unlink()
            print(f"Removed .bmp file: {file_path}")

run_processing(input_path, output_path, packet_size=DEFAULT_PACKET_SIZE, chunk_size=DEFAULT_CHUNK_SIZE)

Process one or more GIF files and return paths to the generated metadata.

This is the primary entry point for programmatic use of the pipeline. It processes every GIF found at input_path, then groups the resulting packet files into chunk sub-folders.

Parameters:

  input_path (Path, required) – Path to a single GIF file or a directory containing GIFs.
  output_path (Path, required) – Directory in which to write all processed outputs. Created automatically if it does not exist.
  packet_size (int, default: DEFAULT_PACKET_SIZE) – Number of RGB565 hex values per packet.
  chunk_size (int, default: DEFAULT_CHUNK_SIZE) – Number of packet files per chunk sub-folder.

Raises:

  FileNotFoundError – If no valid GIF files are found at input_path.

Returns:

  list[Path] – A list of paths to the generated *_meta.json metadata files, one per processed GIF.

Source code in archive/process_packets.py
def run_processing(
    input_path: Path,
    output_path: Path,
    packet_size: int = DEFAULT_PACKET_SIZE,
    chunk_size: int = DEFAULT_CHUNK_SIZE,
) -> list[Path]:
    """Process one or more GIF files and return paths to the generated metadata.

    This is the primary entry point for programmatic use of the pipeline.
    It processes every GIF found at ``input_path``, then groups the resulting
    packet files into chunk sub-folders.

    Args:
        input_path: Path to a single GIF file or a directory containing GIFs.
        output_path: Directory in which to write all processed outputs.
            Created automatically if it does not exist.
        packet_size: Number of RGB565 hex values per packet.
            Defaults to :data:`DEFAULT_PACKET_SIZE`.
        chunk_size: Number of packet files per chunk sub-folder.
            Defaults to :data:`DEFAULT_CHUNK_SIZE`.

    Raises:
        FileNotFoundError: If no valid GIF files are found at ``input_path``.

    Returns:
        A list of paths to the generated ``*_meta.json`` metadata files, one
        per processed GIF.
    """
    output_path.mkdir(parents=True, exist_ok=True)
    gif_files = iter_gif_files(input_path)
    if not gif_files:
        raise FileNotFoundError(f"No valid GIF files found at: {input_path}")

    metadata_paths: list[Path] = []
    for gif_path in gif_files:
        metadata_paths.append(process_gif(gif_path, output_path, packet_size))

    group_packets_into_chunks(output_path, chunk_size)
    return metadata_paths

save_packets_to_files(hex_values_list, output_folder_path, gif_base_name, packet_size, master_file_path)

Serialise RGB565 hex values into numbered packet files.

All per-frame hex value lists are flattened into a single sequence, then split into fixed-size packets. Each packet is written to its own *_packet_NNNNN.txt file, and the raw hex data is also appended to a master *_processed.txt file.
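
The flatten-and-split arithmetic can be sketched as follows (the frame counts and pixel values are illustrative):

```python
packet_size = 120

# Two "frames" of hex values, 125 each, flattened to 250 values total.
hex_values_list = [["F800"] * 125, ["07E0"] * 125]
all_hex_values = [value for frame in hex_values_list for value in frame]

# Ceiling division: 250 values at 120 per packet gives 3 packets.
total_packets = len(all_hex_values) // packet_size + (1 if len(all_hex_values) % packet_size else 0)

# Fixed-size slices; the final packet holds the 10 leftover values.
packets = [
    "".join(all_hex_values[i : i + packet_size])
    for i in range(0, len(all_hex_values), packet_size)
]
```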

Parameters:

  hex_values_list (list[list[str]], required) – A list where each element is the list of hex strings for one GIF frame.
  output_folder_path (Path, required) – Directory in which to write the packet files.
  gif_base_name (str, required) – Base name of the source GIF (used in file naming).
  packet_size (int, required) – Number of hex values per packet.
  master_file_path (Path, required) – Path of the master output file that aggregates all raw hex data.

Returns:

  int – The total number of packets written.

Source code in archive/process_packets.py
def save_packets_to_files(
    hex_values_list: list[list[str]],
    output_folder_path: Path,
    gif_base_name: str,
    packet_size: int,
    master_file_path: Path,
) -> int:
    """Serialise RGB565 hex values into numbered packet files.

    All per-frame hex value lists are flattened into a single sequence, then
    split into fixed-size packets.  Each packet is written to its own
    ``*_packet_NNNNN.txt`` file, and the raw hex data is also appended to a
    master ``*_processed.txt`` file.

    Args:
        hex_values_list: A list where each element is the list of hex strings
            for one GIF frame.
        output_folder_path: Directory in which to write the packet files.
        gif_base_name: Base name of the source GIF (used in file naming).
        packet_size: Number of hex values per packet.
        master_file_path: Path of the master output file that aggregates all
            raw hex data.

    Returns:
        The total number of packets written.
    """
    output_folder_path.mkdir(parents=True, exist_ok=True)
    all_hex_values = [hex_value for hex_values in hex_values_list for hex_value in hex_values]
    total_packets = len(all_hex_values) // packet_size + (1 if len(all_hex_values) % packet_size > 0 else 0)
    master_file_path.write_text("")

    packet_index = 0
    for i in range(0, len(all_hex_values), packet_size):
        packet_data = "".join(all_hex_values[i : i + packet_size])
        packet = create_packet(packet_index, packet_data, total_packets)
        packet_file_path = output_folder_path / f"{gif_base_name}_packet_{packet_index:0{PACKET_FILE_DIGITS}d}.txt"
        packet_file_path.write_text(packet)
        with master_file_path.open("a") as master_file:
            master_file.write(packet_data)
        print(f"Saved packet {packet_index + 1} to: {packet_file_path}")
        packet_index += 1

    return total_packets

write_metadata(output_folder_path, gif_base_name, num_frames, total_packets)

Write a JSON metadata file summarising the processed GIF.

Parameters:

  output_folder_path (Path, required) – Directory in which to write the metadata file.
  gif_base_name (str, required) – Base name of the source GIF.
  num_frames (int, required) – Total number of frames extracted from the GIF.
  total_packets (int, required) – Total number of packets generated.

Returns:

  Path – Path to the written *_meta.json file.

Source code in archive/process_packets.py
def write_metadata(output_folder_path: Path, gif_base_name: str, num_frames: int, total_packets: int) -> Path:
    """Write a JSON metadata file summarising the processed GIF.

    Args:
        output_folder_path: Directory in which to write the metadata file.
        gif_base_name: Base name of the source GIF.
        num_frames: Total number of frames extracted from the GIF.
        total_packets: Total number of packets generated.

    Returns:
        Path to the written ``*_meta.json`` file.
    """
    metadata = {
        "gif_name": gif_base_name,
        "num_frames": num_frames,
        "num_packets": total_packets,
        "creator": "",
        "description": "",
    }
    meta_path = output_folder_path / f"{gif_base_name}_meta.json"
    with meta_path.open("w") as file_handle:
        json.dump(metadata, file_handle, indent=2)
    print(f"Wrote metadata to: {meta_path}")
    return meta_path