Numcodecs

Numcodecs is a Python package providing buffer compression and transformation codecs for use in data storage and communication applications. These include:

  • Compression codecs, e.g., Zlib, BZ2, LZMA, ZFPY and Blosc.

  • Pre-compression filters, e.g., Delta, Quantize, FixedScaleOffset, PackBits, Categorize.

  • Integrity checks, e.g., CRC32, Adler32.

All codecs implement the same API, allowing codecs to be organized into pipelines in a variety of ways.
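
As a brief illustration (a minimal sketch; the codec choices and pipeline order here are purely illustrative), filters are applied in order on encode and in reverse order on decode:

```python
import numpy as np
import numcodecs

# Illustrative two-stage pipeline: a Delta filter followed by Zlib compression.
data = np.arange(1000, dtype='i4')
pipeline = [numcodecs.Delta(dtype='i4'), numcodecs.Zlib(level=1)]

# Encode: apply each codec in turn.
encoded = data
for codec in pipeline:
    encoded = codec.encode(encoded)

# Decode: apply the codecs in reverse order.
decoded = encoded
for codec in reversed(pipeline):
    decoded = codec.decode(decoded)

# Restore an int32 view of the decoded buffer and check the round trip.
decoded = np.frombuffer(decoded, dtype='i4')
assert np.array_equal(data, decoded)
```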

If you have a question, find a bug, would like to make a suggestion or contribute code, please raise an issue on GitHub.

Installation

Numcodecs depends on NumPy. It is generally best to install NumPy first using whatever method is most appropriate for your operating system and Python distribution.

Install from PyPI:

$ pip install numcodecs

Alternatively, install via conda:

$ conda install -c conda-forge numcodecs

Numcodecs includes a C extension providing integration with the Blosc library. Wheels are available for most platforms.

Installing a wheel or via conda will install a pre-compiled binary distribution. However, if you have a newer CPU that supports the AVX2 instruction set (e.g., Intel Haswell, Broadwell or Skylake), then installing from source via pip is preferable, because this will compile the Blosc library with optimisations for AVX2.

Note that if you compile the C extensions on a machine with AVX2 support you probably then cannot use the same binaries on a machine without AVX2. To disable compilation with AVX2 support regardless of the machine architecture:

$ export DISABLE_NUMCODECS_AVX2=
$ pip install -v --no-cache-dir --no-binary numcodecs numcodecs

To work with Numcodecs source code in development, install from GitHub:

$ git clone --recursive https://github.com/zarr-developers/numcodecs.git
$ cd numcodecs
$ python setup.py install

To verify that Numcodecs has been fully installed (including the Blosc extension) run the test suite:

$ pip install nose
$ python -m nose -v numcodecs

Contents

Codec API

This module defines the Codec base class, a common interface for all codec classes.

Codec classes must implement Codec.encode() and Codec.decode() methods. Inputs to and outputs from these methods may be any Python object exporting a contiguous buffer via the new-style Python buffer protocol.

Codec classes must implement a Codec.get_config() method, which must return a dictionary holding all configuration parameters required to enable encoding and decoding of data. The expectation is that these configuration parameters will be stored or communicated separately from encoded data, and thus the codecs do not need to store all encoding parameters within the encoded data. For broad compatibility, the configuration object must contain only JSON-serializable values. The configuration object must also contain an ‘id’ field storing the codec identifier (see below).

Codec classes must implement a Codec.from_config() class method, which will return an instance of the class initialized from a configuration object.

Finally, codec classes must set a codec_id class-level attribute. This must be a string. Two different codec classes may set the same value for the codec_id attribute if and only if they are fully compatible, meaning that (1) configuration parameters are the same, and (2) given the same configuration, one class could correctly decode data encoded by the other and vice versa.
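
To make this contract concrete, here is a minimal sketch of a hypothetical custom codec (the Negate class and its behaviour are invented purely for illustration; they are not part of Numcodecs):

```python
import numpy as np
from numcodecs.abc import Codec
from numcodecs.compat import ensure_contiguous_ndarray, ndarray_copy
from numcodecs.registry import register_codec


class Negate(Codec):
    """Hypothetical codec that stores the arithmetic negation of the data."""

    codec_id = 'negate'  # identifier included in the configuration returned by get_config()

    def encode(self, buf):
        arr = ensure_contiguous_ndarray(buf)
        return -arr

    def decode(self, buf, out=None):
        dec = -ensure_contiguous_ndarray(buf)
        return ndarray_copy(dec, out)

    # get_config() and from_config() are inherited from Codec; the inherited defaults
    # derive the configuration from instance attributes, which suffices for a
    # parameterless codec like this one.


# Make the codec discoverable via the registry (see the next section).
register_codec(Negate)
```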

class numcodecs.abc.Codec[source]

Codec abstract base class.

codec_id = None

Codec identifier.

abstract encode(buf)[source]

Encode data in buf.

Parameters
buf : buffer-like

Data to be encoded. May be any object supporting the new-style buffer protocol.

Returns
enc : buffer-like

Encoded data. May be any object supporting the new-style buffer protocol.

abstract decode(buf, out=None)[source]

Decode data in buf.

Parameters
buf : buffer-like

Encoded data. May be any object supporting the new-style buffer protocol.

out : buffer-like, optional

Writeable buffer to store decoded data. N.B. if provided, this buffer must be exactly the right size to store the decoded data.

Returns
dec : buffer-like

Decoded data. May be any object supporting the new-style buffer protocol.

get_config()[source]

Return a dictionary holding configuration parameters for this codec. Must include an ‘id’ field with the codec identifier. All values must be compatible with JSON encoding.

classmethod from_config(config)[source]

Instantiate codec from a configuration object.

Codec registry

The registry module provides some simple convenience functions to enable applications to dynamically register and look up codec classes.

numcodecs.registry.get_codec(config)[source]

Obtain a codec for the given configuration.

Parameters
config : dict-like

Configuration object.

Returns
codec : Codec

Examples

>>> import numcodecs as codecs
>>> codec = codecs.get_codec(dict(id='zlib', level=1))
>>> codec
Zlib(level=1)
numcodecs.registry.register_codec(cls, codec_id=None)[source]

Register a codec class.

Parameters
cls : Codec class

Notes

This function maintains a mapping from codec identifiers to codec classes. When a codec class is registered, it will replace any class previously registered under the same codec identifier, if present.
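
A short, hedged sketch of how get_codec() and get_config() fit together (the configuration shown in the comment is indicative):

```python
import numcodecs
from numcodecs.registry import get_codec

codec = numcodecs.Zlib(level=5)
config = codec.get_config()        # e.g. {'id': 'zlib', 'level': 5}

# Store or transmit the JSON-serializable config, then rebuild an equivalent codec later.
codec2 = get_codec(config)
assert codec2.get_config() == config
```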

Blosc

class numcodecs.blosc.Blosc(cname='lz4', clevel=5, shuffle=1, blocksize=0)

Codec providing compression using the Blosc meta-compressor.

Parameters
cname : string, optional

A string naming one of the compression algorithms available within blosc, e.g., ‘zstd’, ‘blosclz’, ‘lz4’, ‘lz4hc’, ‘zlib’ or ‘snappy’.

clevel : integer, optional

An integer between 0 and 9 specifying the compression level.

shuffle : integer, optional

Either NOSHUFFLE (0), SHUFFLE (1), BITSHUFFLE (2) or AUTOSHUFFLE (-1). If AUTOSHUFFLE, bit-shuffle will be used for buffers with itemsize 1, and byte-shuffle will be used otherwise. The default is SHUFFLE.

blocksize : int

The requested size of the compressed blocks. If 0 (default), an automatic blocksize will be used.

codec_id = 'blosc'

Codec identifier.

NOSHUFFLE = 0
SHUFFLE = 1
BITSHUFFLE = 2
AUTOSHUFFLE = -1
encode(self, buf)
decode(self, buf, out=None)
get_config()

Return a dictionary holding configuration parameters for this codec. Must include an ‘id’ field with the codec identifier. All values must be compatible with JSON encoding.

classmethod from_config(config)

Instantiate codec from a configuration object.

decode_partial(self, buf, int start, int nitems, out=None)

Experimental
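
Before moving on to the helper functions, a brief usage sketch of the Blosc codec class (assuming the Blosc extension is available; parameter choices are illustrative):

```python
import numpy as np
import numcodecs

data = np.arange(1_000_000, dtype='i4')
codec = numcodecs.Blosc(cname='zstd', clevel=5, shuffle=numcodecs.Blosc.BITSHUFFLE)
compressed = codec.encode(data)

# Decode into a pre-allocated buffer of exactly the right size.
out = np.empty_like(data)
codec.decode(compressed, out=out)
assert np.array_equal(data, out)
```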

Helper functions

numcodecs.blosc.init()

Initialize the Blosc library environment.

numcodecs.blosc.destroy()

Destroy the Blosc library environment.

numcodecs.blosc.compname_to_compcode(cname)

Return the compressor code associated with the compressor name. If the compressor name is not recognized, or it is not supported in this build, -1 is returned instead.

numcodecs.blosc.list_compressors()

Get a list of compressors supported in the current build.

numcodecs.blosc.get_nthreads()

Get the number of threads that Blosc uses internally for compression and decompression.

numcodecs.blosc.set_nthreads(int nthreads)

Set the number of threads that Blosc uses internally for compression and decompression.

numcodecs.blosc.cbuffer_sizes(source)

Return information about a compressed buffer, namely the number of uncompressed bytes (nbytes) and compressed bytes (cbytes). It also returns the blocksize (which is used internally for block-wise compression).

Returns
nbytes : int
cbytes : int
blocksize : int
numcodecs.blosc.cbuffer_complib(source)

Return the name of the compression library used to compress source.

numcodecs.blosc.cbuffer_metainfo(source)

Return some meta-information about the compressed buffer in source, including the typesize, whether the shuffle or bit-shuffle filters were used, and whether the buffer was memcpyed.

Returns
typesize
shuffle
memcpyed
numcodecs.blosc.compress(source, char *cname, int clevel, int shuffle=SHUFFLE, int blocksize=AUTOBLOCKS)

Compress data.

Parameters
source : bytes-like

Data to be compressed. Can be any object supporting the buffer protocol.

cname : bytes

Name of compression library to use.

clevel : int

Compression level.

shuffle : int

Either NOSHUFFLE (0), SHUFFLE (1), BITSHUFFLE (2) or AUTOSHUFFLE (-1). If AUTOSHUFFLE, bit-shuffle will be used for buffers with itemsize 1, and byte-shuffle will be used otherwise. The default is SHUFFLE.

blocksize : int

The requested size of the compressed blocks. If 0, an automatic blocksize will be used.

Returns
dest : bytes

Compressed data.

numcodecs.blosc.decompress(source, dest=None)

Decompress data.

Parameters
source : bytes-like

Compressed data, including blosc header. Can be any object supporting the buffer protocol.

dest : array-like, optional

Object to decompress into.

Returns
dest : bytes

Object containing decompressed data.

numcodecs.blosc.decompress_partial(source, start, nitems, dest=None)

Experimental: decompress only part of a buffer.

Parameters
source : bytes-like

Compressed data, including blosc header. Can be any object supporting the buffer protocol.

start : int

Offset, in items, at which to start decoding.

nitems : int

Number of items to decode.

dest : array-like, optional

Object to decompress into.

Returns
dest : bytes

Object containing decompressed data.
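
The module-level helper functions can also be used directly, bypassing the codec class. A hedged sketch (assuming the Blosc extension is available; note that cname is passed as bytes):

```python
import numpy as np
from numcodecs import blosc

data = np.arange(100_000, dtype='f8')

# Module-level compression; cname is a bytes object naming the internal compressor.
compressed = blosc.compress(data, b'lz4', 5, blosc.SHUFFLE)

# Decompress into a pre-allocated destination of exactly the right size.
out = np.empty_like(data)
blosc.decompress(compressed, out)
assert np.array_equal(data, out)
```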

LZ4

class numcodecs.lz4.LZ4(acceleration=1)

Codec providing compression using LZ4.

Parameters
acceleration : int

Acceleration level. The larger the acceleration value, the faster the algorithm runs, but the lower the compression ratio.

codec_id = 'lz4'

Codec identifier.

encode(self, buf)
decode(self, buf, out=None)
get_config()

Return a dictionary holding configuration parameters for this codec. Must include an ‘id’ field with the codec identifier. All values must be compatible with JSON encoding.

classmethod from_config(config)

Instantiate codec from a configuration object.

Helper functions

numcodecs.lz4.compress(source, int acceleration=DEFAULT_ACCELERATION)

Compress data.

Parameters
source : bytes-like

Data to be compressed. Can be any object supporting the buffer protocol.

acceleration : int

Acceleration level. The larger the acceleration value, the faster the algorithm runs, but the lower the compression ratio.

Returns
dest : bytes

Compressed data.

Notes

The compressed output includes a 4-byte header storing the original size of the decompressed data as a little-endian 32-bit integer.

numcodecs.lz4.decompress(source, dest=None)

Decompress data.

Parameters
source : bytes-like

Compressed data. Can be any object supporting the buffer protocol.

dest : array-like, optional

Object to decompress into.

Returns
dest : bytes

Object containing decompressed data.
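
A minimal, hedged round-trip using these helper functions (assuming the LZ4 extension is available):

```python
import numpy as np
from numcodecs import lz4

data = np.arange(10_000, dtype='i8')
compressed = lz4.compress(data)          # default acceleration
restored = lz4.decompress(compressed)
assert np.array_equal(data, np.frombuffer(restored, dtype='i8'))
```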

ZFPY

class numcodecs.zfpy.ZFPY(mode=4, tolerance=-1, rate=-1, precision=-1, compression_kwargs=None)[source]

Codec providing compression using the zfpy package.

Parameters
mode : integer

One of the zfpy mode choices, e.g., zfpy.mode_fixed_accuracy.

tolerance : double, optional

A double-precision number, specifying the compression accuracy needed.

rate : double, optional

A double-precision number, specifying the compression rate needed.

precision : int, optional

An integer specifying the compression precision needed.

codec_id = 'zfpy'

Codec identifier.

encode(buf)[source]

Encode data in buf.

Parameters
buf : buffer-like

Data to be encoded. May be any object supporting the new-style buffer protocol.

Returns
enc : buffer-like

Encoded data. May be any object supporting the new-style buffer protocol.

decode(buf, out=None)[source]

Decode data in buf.

Parameters
buf : buffer-like

Encoded data. May be any object supporting the new-style buffer protocol.

out : buffer-like, optional

Writeable buffer to store decoded data. N.B. if provided, this buffer must be exactly the right size to store the decoded data.

Returns
dec : buffer-like

Decoded data. May be any object supporting the new-style buffer protocol.

get_config()

Return a dictionary holding configuration parameters for this codec. Must include an ‘id’ field with the codec identifier. All values must be compatible with JSON encoding.

classmethod from_config(config)

Instantiate codec from a configuration object.

Zstd

class numcodecs.zstd.Zstd(level=1)

Codec providing compression using Zstandard.

Parameters
level : int

Compression level (1-22).

codec_id = 'zstd'

Codec identifier.

encode(self, buf)
decode(self, buf, out=None)
get_config()

Return a dictionary holding configuration parameters for this codec. Must include an ‘id’ field with the codec identifier. All values must be compatible with JSON encoding.

classmethod from_config(config)

Instantiate codec from a configuration object.

Helper functions

numcodecs.zstd.compress(source, int level=DEFAULT_CLEVEL)

Compress data.

Parameters
source : bytes-like

Data to be compressed. Can be any object supporting the buffer protocol.

level : int

Compression level (1-22).

Returns
dest : bytes

Compressed data.

numcodecs.zstd.decompress(source, dest=None)

Decompress data.

Parameters
source : bytes-like

Compressed data. Can be any object supporting the buffer protocol.

dest : array-like, optional

Object to decompress into.

Returns
dest : bytes

Object containing decompressed data.
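
A minimal, hedged round-trip using these helper functions (assuming the Zstd extension is available):

```python
import numpy as np
from numcodecs import zstd

data = np.arange(10_000, dtype='i8')
compressed = zstd.compress(data, 10)     # compression level 10

# Decompress into a pre-allocated destination of exactly the right size.
out = np.empty_like(data)
zstd.decompress(compressed, out)
assert np.array_equal(data, out)
```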

Zlib

class numcodecs.zlib.Zlib(level=1)[source]

Codec providing compression using zlib via the Python standard library.

Parameters
level : int

Compression level.

codec_id = 'zlib'

Codec identifier.

encode(buf)[source]

Encode data in buf.

Parameters
buf : buffer-like

Data to be encoded. May be any object supporting the new-style buffer protocol.

Returns
enc : buffer-like

Encoded data. May be any object supporting the new-style buffer protocol.

decode(buf, out=None)[source]

Decode data in buf.

Parameters
buf : buffer-like

Encoded data. May be any object supporting the new-style buffer protocol.

out : buffer-like, optional

Writeable buffer to store decoded data. N.B. if provided, this buffer must be exactly the right size to store the decoded data.

Returns
dec : buffer-like

Decoded data. May be any object supporting the new-style buffer protocol.

get_config()

Return a dictionary holding configuration parameters for this codec. Must include an ‘id’ field with the codec identifier. All values must be compatible with JSON encoding.

classmethod from_config(config)

Instantiate codec from a configuration object.

GZip

class numcodecs.gzip.GZip(level=1)[source]

Codec providing gzip compression using zlib via the Python standard library.

Parameters
level : int

Compression level.

codec_id = 'gzip'

Codec identifier.

encode(buf)[source]

Encode data in buf.

Parameters
buf : buffer-like

Data to be encoded. May be any object supporting the new-style buffer protocol.

Returns
enc : buffer-like

Encoded data. May be any object supporting the new-style buffer protocol.

decode(buf, out=None)[source]

Decode data in buf.

Parameters
buf : buffer-like

Encoded data. May be any object supporting the new-style buffer protocol.

out : buffer-like, optional

Writeable buffer to store decoded data. N.B. if provided, this buffer must be exactly the right size to store the decoded data.

Returns
dec : buffer-like

Decoded data. May be any object supporting the new-style buffer protocol.

get_config()

Return a dictionary holding configuration parameters for this codec. Must include an ‘id’ field with the codec identifier. All values must be compatible with JSON encoding.

classmethod from_config(config)

Instantiate codec from a configuration object.

BZ2

class numcodecs.bz2.BZ2(level=1)[source]

Codec providing compression using bzip2 via the Python standard library.

Parameters
level : int

Compression level.

codec_id = 'bz2'

Codec identifier.

encode(buf)[source]

Encode data in buf.

Parameters
buf : buffer-like

Data to be encoded. May be any object supporting the new-style buffer protocol.

Returns
enc : buffer-like

Encoded data. May be any object supporting the new-style buffer protocol.

decode(buf, out=None)[source]

Decode data in buf.

Parameters
buf : buffer-like

Encoded data. May be any object supporting the new-style buffer protocol.

out : buffer-like, optional

Writeable buffer to store decoded data. N.B. if provided, this buffer must be exactly the right size to store the decoded data.

Returns
dec : buffer-like

Decoded data. May be any object supporting the new-style buffer protocol.

get_config()

Return a dictionary holding configuration parameters for this codec. Must include an ‘id’ field with the codec identifier. All values must be compatible with JSON encoding.

classmethod from_config(config)

Instantiate codec from a configuration object.

LZMA

class numcodecs.lzma.LZMA(format=1, check=-1, preset=None, filters=None)[source]

Codec providing compression using lzma via the Python standard library.

Parameters
format : integer, optional

One of the lzma format codes, e.g., lzma.FORMAT_XZ.

check : integer, optional

One of the lzma check codes, e.g., lzma.CHECK_NONE.

preset : integer, optional

An integer between 0 and 9 inclusive, specifying the compression level.

filters : list, optional

A list of dictionaries specifying compression filters. If filters are provided, ‘preset’ must be None.

codec_id = 'lzma'

Codec identifier.

encode(buf)[source]

Encode data in buf.

Parameters
buf : buffer-like

Data to be encoded. May be any object supporting the new-style buffer protocol.

Returns
enc : buffer-like

Encoded data. May be any object supporting the new-style buffer protocol.

decode(buf, out=None)[source]

Decode data in buf.

Parameters
buf : buffer-like

Encoded data. May be any object supporting the new-style buffer protocol.

out : buffer-like, optional

Writeable buffer to store decoded data. N.B. if provided, this buffer must be exactly the right size to store the decoded data.

Returns
dec : buffer-like

Decoded data. May be any object supporting the new-style buffer protocol.

get_config()

Return a dictionary holding configuration parameters for this codec. Must include an ‘id’ field with the codec identifier. All values must be compatible with JSON encoding.

classmethod from_config(config)

Instantiate codec from a configuration object.
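
As a hedged illustration of the filters parameter (the particular filter chain below is only an example, built from constants in the standard library lzma module; lzma.FORMAT_RAW is used so the same filter chain applies on both encode and decode):

```python
import lzma
import numpy as np
import numcodecs

# A byte-wise delta filter followed by LZMA2, expressed as standard-library filter dicts.
filters = [
    {'id': lzma.FILTER_DELTA, 'dist': 4},
    {'id': lzma.FILTER_LZMA2, 'preset': 1},
]
codec = numcodecs.LZMA(format=lzma.FORMAT_RAW, filters=filters)  # preset stays None

data = np.arange(1000, dtype='i4')
restored = np.frombuffer(codec.decode(codec.encode(data)), dtype='i4')
assert np.array_equal(data, restored)
```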

Delta

class numcodecs.delta.Delta(dtype, astype=None)[source]

Codec to encode data as the difference between adjacent values.

Parameters
dtype : dtype

Data type to use for decoded data.

astype : dtype, optional

Data type to use for encoded data.

Notes

If astype is an integer data type, please ensure that it is sufficiently large to store encoded values. No checks are made and data may become corrupted due to integer overflow if astype is too small. Note also that the encoded data for each chunk includes the absolute value of the first element in the chunk, and so the encoded data type in general needs to be large enough to store absolute values from the array.

Examples

>>> import numcodecs
>>> import numpy as np
>>> x = np.arange(100, 120, 2, dtype='i8')
>>> codec = numcodecs.Delta(dtype='i8', astype='i1')
>>> y = codec.encode(x)
>>> y
array([100,   2,   2,   2,   2,   2,   2,   2,   2,   2], dtype=int8)
>>> z = codec.decode(y)
>>> z
array([100, 102, 104, 106, 108, 110, 112, 114, 116, 118])
codec_id = 'delta'

Codec identifier.

encode(buf)[source]

Encode data in buf.

Parameters
buf : buffer-like

Data to be encoded. May be any object supporting the new-style buffer protocol.

Returns
enc : buffer-like

Encoded data. May be any object supporting the new-style buffer protocol.

decode(buf, out=None)[source]

Decode data in buf.

Parameters
buf : buffer-like

Encoded data. May be any object supporting the new-style buffer protocol.

out : buffer-like, optional

Writeable buffer to store decoded data. N.B. if provided, this buffer must be exactly the right size to store the decoded data.

Returns
dec : buffer-like

Decoded data. May be any object supporting the new-style buffer protocol.

get_config()[source]

Return a dictionary holding configuration parameters for this codec. Must include an ‘id’ field with the codec identifier. All values must be compatible with JSON encoding.

classmethod from_config(config)

Instantiate codec from a configuration object.

FixedScaleOffset

class numcodecs.fixedscaleoffset.FixedScaleOffset(offset, scale, dtype, astype=None)[source]

Simplified version of the scale-offset filter available in HDF5. Applies the transformation (x - offset) * scale to all chunks. Results are rounded to the nearest integer but are not packed according to the minimum number of bits.

Parameters
offset : float

Value to subtract from data.

scale : int

Value by which to multiply the data.

dtype : dtype

Data type to use for decoded data.

astype : dtype, optional

Data type to use for encoded data.

Notes

If astype is an integer data type, please ensure that it is sufficiently large to store encoded values. No checks are made and data may become corrupted due to integer overflow if astype is too small.

Examples

>>> import numcodecs
>>> import numpy as np
>>> x = np.linspace(1000, 1001, 10, dtype='f8')
>>> x
array([1000.        , 1000.11111111, 1000.22222222, 1000.33333333,
       1000.44444444, 1000.55555556, 1000.66666667, 1000.77777778,
       1000.88888889, 1001.        ])
>>> codec = numcodecs.FixedScaleOffset(offset=1000, scale=10, dtype='f8', astype='u1')
>>> y1 = codec.encode(x)
>>> y1
array([ 0,  1,  2,  3,  4,  6,  7,  8,  9, 10], dtype=uint8)
>>> z1 = codec.decode(y1)
>>> z1
array([1000. , 1000.1, 1000.2, 1000.3, 1000.4, 1000.6, 1000.7,
       1000.8, 1000.9, 1001. ])
>>> codec = numcodecs.FixedScaleOffset(offset=1000, scale=10**2, dtype='f8', astype='u1')
>>> y2 = codec.encode(x)
>>> y2
array([ 0,  11,  22,  33,  44,  56,  67,  78,  89, 100], dtype=uint8)
>>> z2 = codec.decode(y2)
>>> z2
array([1000.  , 1000.11, 1000.22, 1000.33, 1000.44, 1000.56,
       1000.67, 1000.78, 1000.89, 1001.  ])
>>> codec = numcodecs.FixedScaleOffset(offset=1000, scale=10**3, dtype='f8', astype='u2')
>>> y3 = codec.encode(x)
>>> y3
array([ 0,  111,  222,  333,  444,  556,  667,  778,  889, 1000], dtype=uint16)
>>> z3 = codec.decode(y3)
>>> z3
array([1000.   , 1000.111, 1000.222, 1000.333, 1000.444, 1000.556,
       1000.667, 1000.778, 1000.889, 1001.   ])
codec_id = 'fixedscaleoffset'

Codec identifier.

encode(buf)[source]

Encode data in buf.

Parameters
buf : buffer-like

Data to be encoded. May be any object supporting the new-style buffer protocol.

Returns
enc : buffer-like

Encoded data. May be any object supporting the new-style buffer protocol.

decode(buf, out=None)[source]

Decode data in buf.

Parameters
buf : buffer-like

Encoded data. May be any object supporting the new-style buffer protocol.

out : buffer-like, optional

Writeable buffer to store decoded data. N.B. if provided, this buffer must be exactly the right size to store the decoded data.

Returns
dec : buffer-like

Decoded data. May be any object supporting the new-style buffer protocol.

get_config()[source]

Return a dictionary holding configuration parameters for this codec. Must include an ‘id’ field with the codec identifier. All values must be compatible with JSON encoding.

classmethod from_config(config)

Instantiate codec from a configuration object.

Quantize

class numcodecs.quantize.Quantize(digits, dtype, astype=None)[source]

Lossy filter to reduce the precision of floating point data.

Parameters
digits : int

Desired precision (number of decimal digits).

dtype : dtype

Data type to use for decoded data.

astype : dtype, optional

Data type to use for encoded data.

Examples

>>> import numcodecs
>>> import numpy as np
>>> x = np.linspace(0, 1, 10, dtype='f8')
>>> x
array([0.        , 0.11111111, 0.22222222, 0.33333333, 0.44444444,
       0.55555556, 0.66666667, 0.77777778, 0.88888889, 1.        ])
>>> codec = numcodecs.Quantize(digits=1, dtype='f8')
>>> codec.encode(x)
array([0.    , 0.125 , 0.25  , 0.3125, 0.4375, 0.5625, 0.6875,
       0.75  , 0.875 , 1.    ])
>>> codec = numcodecs.Quantize(digits=2, dtype='f8')
>>> codec.encode(x)
array([0.       , 0.109375 , 0.21875  , 0.3359375, 0.4453125,
       0.5546875, 0.6640625, 0.78125  , 0.890625 , 1.       ])
>>> codec = numcodecs.Quantize(digits=3, dtype='f8')
>>> codec.encode(x)
array([0.        , 0.11132812, 0.22265625, 0.33300781, 0.44433594,
       0.55566406, 0.66699219, 0.77734375, 0.88867188, 1.        ])
codec_id = 'quantize'

Codec identifier.

encode(buf)[source]

Encode data in buf.

Parameters
buf : buffer-like

Data to be encoded. May be any object supporting the new-style buffer protocol.

Returns
enc : buffer-like

Encoded data. May be any object supporting the new-style buffer protocol.

decode(buf, out=None)[source]

Decode data in buf.

Parameters
buf : buffer-like

Encoded data. May be any object supporting the new-style buffer protocol.

out : buffer-like, optional

Writeable buffer to store decoded data. N.B. if provided, this buffer must be exactly the right size to store the decoded data.

Returns
dec : buffer-like

Decoded data. May be any object supporting the new-style buffer protocol.

get_config()[source]

Return a dictionary holding configuration parameters for this codec. Must include an ‘id’ field with the codec identifier. All values must be compatible with JSON encoding.

classmethod from_config(config)

Instantiate codec from a configuration object.

Bitround

class numcodecs.bitround.BitRound(keepbits: int)[source]

Floating-point bit rounding codec

Drops a specified number of bits from the floating point mantissa, leaving an array more amenable to compression. The number of bits to keep should be determined by an information analysis of the data to be compressed. The approach is based on the paper by Klöwer et al. 2021 (https://www.nature.com/articles/s43588-021-00156-2). See https://github.com/zarr-developers/numcodecs/issues/298 for discussion and the original implementation in Julia referred to at https://github.com/milankl/BitInformation.jl

Parameters
keepbits: int

The number of bits of the mantissa to keep. The allowed range depends on the dtype of the input data. If keepbits is equal to the maximum allowed for the data type, this is equivalent to no transform.

codec_id = 'bitround'

Codec identifier.

encode(buf)[source]

Create int array by rounding floating-point data

The itemsize will be preserved, but the output should be much more compressible.

decode(buf, out=None)[source]

Remake floats from ints

As with encode, preserves itemsize.

get_config()

Return a dictionary holding configuration parameters for this codec. Must include an ‘id’ field with the codec identifier. All values must be compatible with JSON encoding.

classmethod from_config(config)

Instantiate codec from a configuration object.
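
A hedged usage sketch (the keepbits value here is arbitrary; in practice it should come from an information analysis of the data):

```python
import numpy as np
from numcodecs.bitround import BitRound

x = np.linspace(0, 1, 1000, dtype='f4')
codec = BitRound(keepbits=6)                 # keep 6 mantissa bits (illustrative)
rounded = codec.decode(codec.encode(x))

# The rounded values stay close to the originals but compress far better afterwards.
assert np.allclose(x, rounded, atol=2 ** -6)
```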

PackBits

class numcodecs.packbits.PackBits[source]

Codec to pack elements of a boolean array into bits in a uint8 array.

Notes

The first element of the encoded array stores the number of bits that were padded to complete the final byte.

Examples

>>> import numcodecs
>>> import numpy as np
>>> codec = numcodecs.PackBits()
>>> x = np.array([True, False, False, True], dtype=bool)
>>> y = codec.encode(x)
>>> y
array([  4, 144], dtype=uint8)
>>> z = codec.decode(y)
>>> z
array([ True, False, False,  True])
codec_id = 'packbits'

Codec identifier.

encode(buf)[source]

Encode data in buf.

Parameters
buf : buffer-like

Data to be encoded. May be any object supporting the new-style buffer protocol.

Returns
enc : buffer-like

Encoded data. May be any object supporting the new-style buffer protocol.

decode(buf, out=None)[source]

Decode data in buf.

Parameters
buf : buffer-like

Encoded data. May be any object supporting the new-style buffer protocol.

out : buffer-like, optional

Writeable buffer to store decoded data. N.B. if provided, this buffer must be exactly the right size to store the decoded data.

Returns
dec : buffer-like

Decoded data. May be any object supporting the new-style buffer protocol.

get_config()

Return a dictionary holding configuration parameters for this codec. Must include an ‘id’ field with the codec identifier. All values must be compatible with JSON encoding.

classmethod from_config(config)

Instantiate codec from a configuration object.

Categorize

class numcodecs.categorize.Categorize(labels, dtype, astype='u1')[source]

Filter encoding categorical string data as integers.

Parameters
labels : sequence of strings

Category labels.

dtype : dtype

Data type to use for decoded data.

astype : dtype, optional

Data type to use for encoded data.

Examples

>>> import numcodecs
>>> import numpy as np
>>> x = np.array(['male', 'female', 'female', 'male', 'unexpected'], dtype=object)
>>> x
array(['male', 'female', 'female', 'male', 'unexpected'],
      dtype=object)
>>> codec = numcodecs.Categorize(labels=['female', 'male'], dtype=object)
>>> y = codec.encode(x)
>>> y
array([2, 1, 1, 2, 0], dtype=uint8)
>>> z = codec.decode(y)
>>> z
array(['male', 'female', 'female', 'male', ''],
      dtype=object)
codec_id = 'categorize'

Codec identifier.

encode(buf)[source]

Encode data in buf.

Parameters
buf : buffer-like

Data to be encoded. May be any object supporting the new-style buffer protocol.

Returns
enc : buffer-like

Encoded data. May be any object supporting the new-style buffer protocol.

decode(buf, out=None)[source]

Decode data in buf.

Parameters
buf : buffer-like

Encoded data. May be any object supporting the new-style buffer protocol.

out : buffer-like, optional

Writeable buffer to store decoded data. N.B. if provided, this buffer must be exactly the right size to store the decoded data.

Returns
dec : buffer-like

Decoded data. May be any object supporting the new-style buffer protocol.

get_config()[source]

Return a dictionary holding configuration parameters for this codec. Must include an ‘id’ field with the codec identifier. All values must be compatible with JSON encoding.

classmethod from_config(config)

Instantiate codec from a configuration object.

32-bit checksums

CRC32

class numcodecs.checksum32.CRC32[source]
codec_id = 'crc32'

Codec identifier.

encode(buf)

Encode data in buf.

Parameters
buf : buffer-like

Data to be encoded. May be any object supporting the new-style buffer protocol.

Returns
enc : buffer-like

Encoded data. May be any object supporting the new-style buffer protocol.

decode(buf, out=None)

Decode data in buf.

Parameters
buf : buffer-like

Encoded data. May be any object supporting the new-style buffer protocol.

out : buffer-like, optional

Writeable buffer to store decoded data. N.B. if provided, this buffer must be exactly the right size to store the decoded data.

Returns
dec : buffer-like

Decoded data. May be any object supporting the new-style buffer protocol.

get_config()

Return a dictionary holding configuration parameters for this codec. Must include an ‘id’ field with the codec identifier. All values must be compatible with JSON encoding.

classmethod from_config(config)

Instantiate codec from a configuration object.
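
A hedged sketch of checksum usage; decode() verifies the stored checksum and raises an error if the encoded data has been corrupted:

```python
import numpy as np
from numcodecs.checksum32 import CRC32

data = np.arange(100, dtype='i4')
codec = CRC32()
protected = codec.encode(data)       # original bytes plus a 4-byte CRC32 checksum

# decode() recomputes and checks the checksum before returning the payload.
restored = np.frombuffer(codec.decode(protected), dtype='i4')
assert np.array_equal(data, restored)
```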

Adler32

class numcodecs.checksum32.Adler32[source]
codec_id = 'adler32'

Codec identifier.

encode(buf)

Encode data in buf.

Parameters
buf : buffer-like

Data to be encoded. May be any object supporting the new-style buffer protocol.

Returns
enc : buffer-like

Encoded data. May be any object supporting the new-style buffer protocol.

decode(buf, out=None)

Decode data in buf.

Parameters
buf : buffer-like

Encoded data. May be any object supporting the new-style buffer protocol.

out : buffer-like, optional

Writeable buffer to store decoded data. N.B. if provided, this buffer must be exactly the right size to store the decoded data.

Returns
dec : buffer-like

Decoded data. May be any object supporting the new-style buffer protocol.

get_config()

Return a dictionary holding configuration parameters for this codec. Must include an ‘id’ field with the codec identifier. All values must be compatible with JSON encoding.

classmethod from_config(config)

Instantiate codec from a configuration object.

AsType

class numcodecs.astype.AsType(encode_dtype, decode_dtype)[source]

Filter to convert data between different types.

Parameters
encode_dtype : dtype

Data type to use for encoded data.

decode_dtype : dtype, optional

Data type to use for decoded data.

Notes

If encode_dtype is of lower precision than decode_dtype, please be aware that data loss can occur by writing data to disk using this filter. No checks are made to ensure the casting will work in that direction, and data corruption may occur.

Examples

>>> import numcodecs
>>> import numpy as np
>>> x = np.arange(100, 120, 2, dtype=np.int8)
>>> x
array([100, 102, 104, 106, 108, 110, 112, 114, 116, 118], dtype=int8)
>>> f = numcodecs.AsType(encode_dtype=x.dtype, decode_dtype=np.int64)
>>> y = f.decode(x)
>>> y
array([100, 102, 104, 106, 108, 110, 112, 114, 116, 118])
>>> z = f.encode(y)
>>> z
array([100, 102, 104, 106, 108, 110, 112, 114, 116, 118], dtype=int8)
codec_id = 'astype'

Codec identifier.

encode(buf)[source]

Encode data in buf.

Parameters
buf : buffer-like

Data to be encoded. May be any object supporting the new-style buffer protocol.

Returns
enc : buffer-like

Encoded data. May be any object supporting the new-style buffer protocol.

decode(buf, out=None)[source]

Decode data in buf.

Parameters
buf : buffer-like

Encoded data. May be any object supporting the new-style buffer protocol.

out : buffer-like, optional

Writeable buffer to store decoded data. N.B. if provided, this buffer must be exactly the right size to store the decoded data.

Returns
dec : buffer-like

Decoded data. May be any object supporting the new-style buffer protocol.

get_config()[source]

Return a dictionary holding configuration parameters for this codec. Must include an ‘id’ field with the codec identifier. All values must be compatible with JSON encoding.

classmethod from_config(config)

Instantiate codec from a configuration object.

JSON

class numcodecs.json.JSON(encoding='utf-8', skipkeys=False, ensure_ascii=True, check_circular=True, allow_nan=True, sort_keys=True, indent=None, separators=None, strict=True)[source]

Codec to encode data as JSON. Useful for encoding an array of Python objects.

Changed in version 0.6: The encoding format has been changed to include the array shape in the encoded data, which ensures that all object arrays can be correctly encoded and decoded.

Examples

>>> import numcodecs
>>> import numpy as np
>>> x = np.array(['foo', 'bar', 'baz'], dtype='object')
>>> codec = numcodecs.JSON()
>>> codec.decode(codec.encode(x))
array(['foo', 'bar', 'baz'], dtype=object)
codec_id = 'json2'

Codec identifier.

encode(buf)[source]

Encode data in buf.

Parameters
buf : buffer-like

Data to be encoded. May be any object supporting the new-style buffer protocol.

Returns
enc : buffer-like

Encoded data. May be any object supporting the new-style buffer protocol.

decode(buf, out=None)[source]

Decode data in buf.

Parameters
buf : buffer-like

Encoded data. May be any object supporting the new-style buffer protocol.

out : buffer-like, optional

Writeable buffer to store decoded data. N.B. if provided, this buffer must be exactly the right size to store the decoded data.

Returns
dec : buffer-like

Decoded data. May be any object supporting the new-style buffer protocol.

get_config()[source]

Return a dictionary holding configuration parameters for this codec. Must include an ‘id’ field with the codec identifier. All values must be compatible with JSON encoding.

classmethod from_config(config)

Instantiate codec from a configuration object.

Pickle

class numcodecs.pickles.Pickle(protocol=4)[source]

Codec to encode data as pickled bytes. Useful for encoding an array of Python string objects.

Parameters
protocol : int, defaults to pickle.HIGHEST_PROTOCOL

The protocol used to pickle data.

Examples

>>> import numcodecs as codecs
>>> import numpy as np
>>> x = np.array(['foo', 'bar', 'baz'], dtype='object')
>>> f = codecs.Pickle()
>>> f.decode(f.encode(x))
array(['foo', 'bar', 'baz'], dtype=object)
codec_id = 'pickle'

Codec identifier.

encode(buf)[source]

Encode data in buf.

Parameters
buf : buffer-like

Data to be encoded. May be any object supporting the new-style buffer protocol.

Returns
enc : buffer-like

Encoded data. May be any object supporting the new-style buffer protocol.

decode(buf, out=None)[source]

Decode data in buf.

Parameters
buf : buffer-like

Encoded data. May be any object supporting the new-style buffer protocol.

out : buffer-like, optional

Writeable buffer to store decoded data. N.B. if provided, this buffer must be exactly the right size to store the decoded data.

Returns
dec : buffer-like

Decoded data. May be any object supporting the new-style buffer protocol.

get_config()[source]

Return a dictionary holding configuration parameters for this codec. Must include an ‘id’ field with the codec identifier. All values must be compatible with JSON encoding.

classmethod from_config(config)

Instantiate codec from a configuration object.

MsgPack

class numcodecs.msgpacks.MsgPack(use_single_float=False, use_bin_type=True, raw=False)[source]

Codec to encode data as msgpacked bytes. Useful for encoding an array of Python objects.

Changed in version 0.6: The encoding format has been changed to include the array shape in the encoded data, which ensures that all object arrays can be correctly encoded and decoded.

Parameters
use_single_float : bool, optional

Use single precision float type for float.

use_bin_type : bool, optional

Use bin type introduced in msgpack spec 2.0 for bytes. It also enables str8 type for unicode.

raw : bool, optional

If true, unpack msgpack raw to Python bytes. Otherwise, unpack to Python str by decoding with UTF-8 encoding.

Notes

Requires msgpack to be installed.

Examples

>>> import numcodecs
>>> import numpy as np
>>> x = np.array(['foo', 'bar', 'baz'], dtype='object')
>>> codec = numcodecs.MsgPack()
>>> codec.decode(codec.encode(x))
array(['foo', 'bar', 'baz'], dtype=object)
codec_id = 'msgpack2'

Codec identifier.

encode(buf)[source]

Encode data in buf.

Parameters
buf : buffer-like

Data to be encoded. May be any object supporting the new-style buffer protocol.

Returns
enc : buffer-like

Encoded data. May be any object supporting the new-style buffer protocol.

decode(buf, out=None)[source]

Decode data in buf.

Parameters
buf : buffer-like

Encoded data. May be any object supporting the new-style buffer protocol.

out : buffer-like, optional

Writeable buffer to store decoded data. N.B. if provided, this buffer must be exactly the right size to store the decoded data.

Returns
dec : buffer-like

Decoded data. May be any object supporting the new-style buffer protocol.

get_config()[source]

Return a dictionary holding configuration parameters for this codec. Must include an ‘id’ field with the codec identifier. All values must be compatible with JSON encoding.

classmethod from_config(config)

Instantiate codec from a configuration object.

Codecs for variable-length objects

VLenUTF8

class numcodecs.vlen.VLenUTF8

Encode variable-length unicode string objects via UTF-8.

Notes

The encoded bytes values for each string are packed into a parquet-style byte array.

Examples

>>> import numcodecs
>>> import numpy as np
>>> x = np.array(['foo', 'bar', 'baz'], dtype='object')
>>> codec = numcodecs.VLenUTF8()
>>> codec.decode(codec.encode(x))
array(['foo', 'bar', 'baz'], dtype=object)
codec_id = 'vlen-utf8'

Codec identifier.

encode(self, buf)
decode(self, buf, out=None)
get_config()

Return a dictionary holding configuration parameters for this codec. Must include an ‘id’ field with the codec identifier. All values must be compatible with JSON encoding.

classmethod from_config(config)

Instantiate codec from a configuration object.

VLenBytes

class numcodecs.vlen.VLenBytes

Encode variable-length byte string objects.

Notes

The bytes values for each string are packed into a parquet-style byte array.

Examples

>>> import numcodecs
>>> import numpy as np
>>> x = np.array([b'foo', b'bar', b'baz'], dtype='object')
>>> codec = numcodecs.VLenBytes()
>>> codec.decode(codec.encode(x))
array([b'foo', b'bar', b'baz'], dtype=object)
codec_id = 'vlen-bytes'

Codec identifier.

encode(self, buf)
decode(self, buf, out=None)
get_config()

Return a dictionary holding configuration parameters for this codec. Must include an ‘id’ field with the codec identifier. All values must be compatible with JSON encoding.

classmethod from_config(config)

Instantiate codec from a configuration object.

VLenArray

class numcodecs.vlen.VLenArray(dtype)

Encode variable-length 1-dimensional arrays.

Notes

The binary data for each array are packed into a parquet-style byte array.

Examples

>>> import numcodecs
>>> import numpy as np
>>> x = np.array([[1, 3, 5], [4], [7, 9]], dtype='object')
>>> codec = numcodecs.VLenArray('<i4')
>>> codec.decode(codec.encode(x))
array([array([1, 3, 5], dtype=int32), array([4], dtype=int32),
       array([7, 9], dtype=int32)], dtype=object)
codec_id = 'vlen-array'

Codec identifier.

encode(self, buf)
decode(self, buf, out=None)
get_config(self)
classmethod from_config(config)

Instantiate codec from a configuration object.

Shuffle

class numcodecs.shuffle.Shuffle(elementsize=4)[source]

Codec providing a byte shuffle filter.

Parameters
elementsize : int

Size in bytes of the array elements. Default = 4

codec_id = 'shuffle'

Codec identifier.

encode(buf, out=None)[source]

Encode data in buf.

Parameters
buf : buffer-like

Data to be encoded. May be any object supporting the new-style buffer protocol.

Returns
enc : buffer-like

Encoded data. May be any object supporting the new-style buffer protocol.

decode(buf, out=None)[source]

Decode data in buf.

Parameters
buf : buffer-like

Encoded data. May be any object supporting the new-style buffer protocol.

out : buffer-like, optional

Writeable buffer to store decoded data. N.B. if provided, this buffer must be exactly the right size to store the decoded data.

Returns
dec : buffer-like

Decoded data. May be any object supporting the new-style buffer protocol.
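
A hedged usage sketch of the Shuffle filter (elementsize should normally match the itemsize of the data; the values here are illustrative):

```python
import numpy as np
from numcodecs.shuffle import Shuffle

data = np.arange(1000, dtype='i8')
codec = Shuffle(elementsize=data.dtype.itemsize)   # shuffle bytes across 8-byte elements
shuffled = codec.encode(data)
restored = np.frombuffer(codec.decode(shuffled), dtype='i8')
assert np.array_equal(data, restored)
```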

Release notes

0.10.0

Enhancements
Bug fixes
Maintenance

0.9.1

0.9.0

  • c-blosc upgrade 1.18.1 -> 1.21.0. Warning: this temporarily removes support for snappy compression! By kindjacket, #283.

  • Fix an ImportError with Blosc on Android. By Daniel Jewell, #284.

0.8.1

0.8.0

0.7.3

  • Add support for Python 3.9 and Update GitHub Actions. By Jackson Maxfield Brown, #270.

  • Remove support for Python 3.5 which is end of life. While the code base might still be compatible; the source dist and wheel are marked as Python 3.6+ and pip will not install them. Continuous integration on Python 3.5 has been disabled. By Matthias Bussonnier, #266 and #267.

0.7.2

0.7.1

0.7.0

0.6.4

0.6.3

0.6.2

0.6.1

0.6.0

  • The encoding format used by the JSON and MsgPack codecs has been changed to resolve an issue with correctly encoding and decoding some object arrays. Now the encoded data includes the original shape of the array, which enables the correct shape to be restored on decoding. The previous encoding format is still supported, so that any data encoded using a previous version of numcodecs can still be read. Thus no changes to user code and applications should be required, other than upgrading numcodecs. By Jerome Kelleher; #74, #75.

  • Updated the msgpack dependency (by Jerome Kelleher; #74, #75).

  • Added support for ppc64le architecture by updating cpuinfo.py from upstream (by Anand S; #82).

  • Allow numcodecs.blosc.Blosc compressor to run on systems where locks are not present (by Marcus Kinsella, #83; and Tom White, #93).

  • Drop Python 3.4 (by John Kirkham; #89).

  • Add Python 3.7 (by John Kirkham; #92).

  • Add codec numcodecs.gzip.GZip to replace gzip alias for zlib, which was incorrect (by Jan Funke; #87; and John Kirkham, #134).

  • Corrects handling of NaT in datetime64 and timedelta64 in various compressors (by John Kirkham; #127, #131).

  • Improvements to the compatibility layer used for normalising inputs to encode and decode methods in most codecs. This removes unnecessary memory copies for some codecs, and also simplifies the implementation of some codecs, improving code readability and maintainability. By John Kirkham and Alistair Miles; #119, #121, #128.

  • Return values from encode() and decode() methods are now returned as numpy arrays for consistency across codecs. By John Kirkham, #136.

  • Improvements to handling of errors in the numcodecs.blosc.Blosc and numcodecs.lz4.LZ4 codecs when the maximum allowed size of an input buffer is exceeded. By Jerome Kelleher, #80, #81.

0.5.5

  • The bundled c-blosc sources have been upgraded to version 1.14.3 (#72).

0.5.4

  • The bundled c-blosc sources have been upgraded to version 1.14.0 (#71).

0.5.3

  • The test suite has been migrated to use pytest instead of nosetests (#61, #62).

  • The bundled c-blosc library has been updated to version 1.13.4 (#63, #64).

0.5.2

  • Add support for encoding None values in VLen… codecs (#59).

0.5.1

  • Fixed a compatibility issue with the Zlib codec to ensure it can handle bytearray objects under Python 2.7 (#57).

  • Restricted the numcodecs.categorize.Categorize codec to object (‘O’) and unicode (‘U’) dtypes and disallowed bytes (‘S’) dtypes because these do not round-trip through JSON configuration.

0.5.0

0.4.1

  • Resolved an issue where providing an array with dtype object as the destination when decoding could cause segfaults with some codecs (#55).

0.4.0

0.3.1

  • Revert the default shuffle argument to SHUFFLE (byte shuffle) for the numcodecs.blosc.Blosc codec for compatibility and consistency with previous code.

0.3.0

  • The numcodecs.blosc.Blosc codec has been made robust for usage in both multithreading and multiprocessing programs, regardless of whether Blosc has been configured to use multiple threads internally or not (#41, #42).

  • The numcodecs.blosc.Blosc codec now supports an AUTOSHUFFLE argument when encoding (compressing) which activates bit- or byte-shuffle depending on the itemsize of the incoming buffer (#37, #42). This is also now the default.

  • The numcodecs.blosc.Blosc codec now raises an exception when an invalid compressor name is provided under all circumstances (#40, #42).

  • The bundled version of the c-blosc library has been upgraded to version 1.12.1 (#45, #42).

  • An improvement has been made to the system detection capabilities during compilation of C extensions (by Prakhar Goel; #36, #38).

  • Arrays with datetime64 or timedelta64 can now be passed directly to compressor codecs (#39, #46).

0.2.1

The bundled c-blosc library has been upgraded to version 1.11.3 (#34, #35).

0.2.0

New codecs:

Other changes:

Maintenance work:

  • A data fixture has been added to the test suite to add some protection against changes to codecs that break backwards-compatibility with data encoded using a previous release of numcodecs (#30, #33).

0.1.1

This release includes a small modification to the setup.py script to provide greater control over how compiler options for different instruction sets are configured (#24, #27).

0.1.0

New codecs:

Other new features:

  • The numcodecs.lzma.LZMA codec is now supported on Python 2.7 if backports.lzma is installed (John Kirkham; #11, #13).

  • The bundled c-blosc library has been upgraded to version 1.11.2 (#10, #18).

  • An option has been added to the numcodecs.blosc.Blosc codec to allow the block size to be manually configured (#9, #19).

  • The representation string for the numcodecs.blosc.Blosc codec has been tweaked to help with understanding the shuffle option (#4, #19).

  • Options have been added to manually control how the C extensions are built regardless of the architecture of the system on which the build is run. To disable support for AVX2 set the environment variable “DISABLE_NUMCODECS_AVX2”. To disable support for SSE2 set the environment variable “DISABLE_NUMCODECS_SSE2”. To disable C extensions altogether set the environment variable “DISABLE_NUMCODECS_CEXT” (#24, #26).

Maintenance work:

  • CI tests now run under Python 3.6 as well as 2.7, 3.4, 3.5 (#16, #17).

  • Test coverage is now monitored via coveralls (#15, #20).

0.0.1

Fixed project description in setup.py.

0.0.0

First release. This version is a port of the codecs module from Zarr 2.1.0. The following changes have been made from the original Zarr module:

  • Codec classes have been re-organized into separate modules, mostly one per codec class, for ease of maintenance.

  • Two new codec classes have been added based on 32-bit checksums: numcodecs.checksum32.CRC32 and numcodecs.checksum32.Adler32.

  • The Blosc extension has been refactored to remove code duplications related to handling of buffer compatibility.

Contributing to NumCodecs

NumCodecs is a community maintained project. We welcome contributions in the form of bug reports, bug fixes, documentation, enhancement proposals and more. This page provides information on how best to contribute.

Asking for help

If you have a question about how to use NumCodecs, please post your question on StackOverflow using the “numcodecs” tag. If you don’t get a response within a day or two, feel free to raise a GitHub issue including a link to your StackOverflow question. We will try to respond to questions as quickly as possible, but please bear in mind that there may be periods where we have limited time to answer questions due to other commitments.

Bug reports

If you find a bug, please raise a GitHub issue. Please include the following items in a bug report:

  1. A minimal, self-contained snippet of Python code reproducing the problem. You can format the code nicely using markdown, e.g.:

    ```python
    >>> import numcodecs
    >>> codec = numcodecs.Zlib(1)
    ...
    ```
    
  2. Information about the version of NumCodecs, along with versions of dependencies and the Python interpreter, and installation information. The version of NumCodecs can be obtained from the numcodecs.__version__ property. Please also state how NumCodecs was installed, e.g., “installed via pip into a virtual environment”, or “installed using conda”. Information about other packages installed can be obtained by executing pip list (if using pip to install packages) or conda list (if using conda to install packages) from the operating system command prompt. The version of the Python interpreter can be obtained by running a Python interactive session, e.g.:

    $ python
    Python 3.7.6 (default, Jan  8 2020, 13:42:34)
    [Clang 4.0.1 (tags/RELEASE_401/final)] :: Anaconda, Inc. on darwin
    
  3. An explanation of why the current behaviour is wrong/not desired, and what you expect instead.

Enhancement proposals

If you have an idea about a new feature or some other improvement to NumCodecs, please raise a GitHub issue first to discuss.

We very much welcome ideas and suggestions for how to improve NumCodecs, but please bear in mind that we are likely to be conservative in accepting proposals for new features. The reasons for this are that we would like to keep the NumCodecs code base lean and focused on a core set of functionalities, and available time for development, review and maintenance of new features is limited. But if you have a great idea, please don’t let that stop you posting it on GitHub, just please don’t be offended if we respond cautiously.

Contributing code and/or documentation

Forking the repository

The NumCodecs source code is hosted on GitHub at the following location:

You will need your own fork to work on the code. Go to the link above and hit the “Fork” button. Then clone your fork to your local machine:

$ git clone git@github.com:your-user-name/numcodecs.git
$ cd numcodecs
$ git remote add upstream git@github.com:zarr-developers/numcodecs.git
Creating a development environment

To work with the NumCodecs source code, it is recommended to set up a Python virtual environment and install all NumCodecs dependencies using the same versions as are used by the core developers and continuous integration services. Assuming you have a Python 3 interpreter already installed, and have also installed the virtualenv package, and you have cloned the NumCodecs source code and your current working directory is the root of the repository, you can do something like the following:

$ mkdir -p ~/pyenv/numcodecs-dev
$ virtualenv --no-site-packages --python=/usr/bin/python3.9 ~/pyenv/numcodecs-dev
$ source ~/pyenv/numcodecs-dev/bin/activate
$ pip install -r requirements_dev.txt
$ python setup.py build_ext --inplace

To verify that your development environment is working, you can run the unit tests:

$ pytest -v numcodecs
Creating a branch

Before you do any new work or submit a pull request, please open an issue on GitHub to report the bug or propose the feature you’d like to add.

It’s best to create a new, separate branch for each piece of work you want to do. E.g.:

git fetch upstream
git checkout -b shiny-new-feature upstream/main

This changes your working directory to the ‘shiny-new-feature’ branch. Keep any changes in this branch specific to one bug or feature so it is clear what the branch brings to NumCodecs.

To update this branch with latest code from NumCodecs, you can retrieve the changes from the main branch and perform a rebase:

git fetch upstream
git rebase upstream/main

This will replay your commits on top of the latest NumCodecs git main. If this leads to merge conflicts, these need to be resolved before submitting a pull request. Alternatively, you can merge the changes in from upstream/main instead of rebasing, which can be simpler:

git fetch upstream
git merge upstream/main

Again, any conflicts need to be resolved before submitting a pull request.

Running the test suite

NumCodecs includes a suite of unit tests, as well as doctests included in function and class docstrings. The simplest way to run the unit tests is to invoke:

$ pytest -v numcodecs

To also run the doctests within docstrings, run:

$ pytest -v --doctest-modules numcodecs

Tests can be run under different Python versions using tox. E.g. (assuming you have the corresponding Python interpreters installed on your system):

$ tox -e py36,py37,py38,py39

NumCodecs currently supports Python 3.6-3.9, so the above command must succeed before code can be accepted into the main code base. Note that only the py39 tox environment runs the doctests, i.e., doctests only need to succeed under Python 3.9.

All tests are automatically run via Travis (Linux) and AppVeyor (Windows) continuous integration services for every pull request. Tests must pass under both services before code can be accepted.

Code standards

All code must conform to the PEP8 standard. Regarding line length, lines up to 100 characters are allowed, although please try to keep under 90 wherever possible. Conformance can be checked by running:

$ flake8 --max-line-length=100 numcodecs

This is automatically run when invoking tox -e py39.

Test coverage

NumCodecs maintains 100% test coverage under the latest Python stable release (currently Python 3.9). Both unit tests and docstring doctests are included when computing coverage. Running tox -e py39 will automatically run the test suite with coverage and produce a coverage report. This should be 100% before code can be accepted into the main code base.

When submitting a pull request, coverage will also be collected across all supported Python versions via the Coveralls service, and will be reported back within the pull request. Coveralls coverage must also be 100% before code can be accepted.

Documentation

Docstrings for user-facing classes and functions should follow the numpydoc standard, including sections for Parameters and Examples. All examples will be run as doctests under Python 3.9.

NumCodecs uses Sphinx for documentation, hosted on readthedocs.org. Documentation is written in the RestructuredText markup language (.rst files) in the docs folder. The documentation consists both of prose and API documentation. All user-facing classes and functions should be included in the API documentation. Any changes should also be included in the release notes (docs/release.rst).

The documentation can be built by running:

$ tox -e docs

The resulting built documentation will be available in the .tox/docs/tmp/html folder.

Development best practices, policies and procedures

The following information is mainly for core developers, but may also be of interest to contributors.

Merging pull requests

Pull requests submitted by an external contributor should be reviewed and approved by at least one core developer before being merged. Ideally, pull requests submitted by a core developer should be reviewed and approved by at least one other core developer before being merged.

Pull requests should not be merged until all CI checks have passed (Travis, AppVeyor, Coveralls) against code that has had the latest main merged in.

Compatibility and versioning policies

Because NumCodecs is a data encoding/decoding library, there are two types of compatibility to consider: API compatibility and data format compatibility.

API compatibility

All functions, classes and methods that are included in the API documentation (files under docs/api/*.rst) are considered as part of the NumCodecs public API, except if they have been documented as an experimental feature, in which case they are part of the experimental API.

Any change to the public API that does not break existing third party code importing NumCodecs, or cause third party code to behave in a different way, is a backwards-compatible API change. For example, adding a new function, class or method is usually a backwards-compatible change. However, removing a function, class or method; removing an argument to a function or method; adding a required argument to a function or method; or changing the behaviour of a function or method, are examples of backwards-incompatible API changes.

If a release contains no changes to the public API (e.g., contains only bug fixes or other maintenance work), then the micro version number should be incremented (e.g., 2.2.0 -> 2.2.1). If a release contains public API changes, but all changes are backwards-compatible, then the minor version number should be incremented (e.g., 2.2.1 -> 2.3.0). If a release contains any backwards-incompatible public API changes, the major version number should be incremented (e.g., 2.3.0 -> 3.0.0).

Backwards-incompatible changes to the experimental API can be included in a minor release, although this should be minimised if possible. I.e., it would be preferable to save up backwards-incompatible changes to the experimental API to be included in a major release, and to stabilise those features at the same time (i.e., move from experimental to public API), rather than frequently tinkering with the experimental API in minor releases.

Data format compatibility

Each codec class in NumCodecs exposes a codec_id attribute, which is an identifier for the format of the encoded data produced by that codec. Thus it is valid for two or more codec classes to expose the same value for the codec_id attribute if the format of the encoded data is identical. The codec_id is intended to provide a basis for achieving and managing interoperability between versions of the NumCodecs package, as well as between NumCodecs and other software libraries that aim to provide compatible codec implementations. Currently there is no formal specification of the encoded data format corresponding to each codec_id, so the codec classes provided in the NumCodecs package should be taken as the reference implementation for a given codec_id.

There must be a one-to-one mapping from codec_id values to encoded data formats, and that mapping must not change once the first implementation of a codec_id has been published within a NumCodecs release. If a change is proposed to the encoded data format for a particular type of codec, then this must be implemented in NumCodecs via a new codec class exposing a new codec_id value.

Note that the NumCodecs test suite includes a data fixture and tests to try and ensure that data format compatibility is not accidentally broken. See the test_backwards_compatibility() functions in test modules for each codec for examples.

When to make a release

Ideally, any bug fixes that don’t change the public API should be released as soon as possible. It is fine for a micro release to contain only a single bug fix.

When to make a minor release is at the discretion of the core developers. There are no hard-and-fast rules, e.g., it is fine to make a minor release to make a single new feature available; equally, it is fine to make a minor release that includes a number of changes.

Major releases obviously need to be given careful consideration, and should be done as infrequently as possible, as they will break existing code and/or affect data compatibility in some way.

Release procedure

Checkout and update the main branch:

$ git checkout main
$ git pull

Verify all tests pass on all supported Python versions, and docs build:

$ tox

Tag the version (where “X.X.X” stands for the version number, e.g., “2.2.0”):

$ version=X.X.X
$ git tag -a v$version -m v$version
$ git push --tags

This will trigger a GitHub Action which will build the source distribution as well as wheels for all major platforms.

Acknowledgments

The following people have contributed to the development of NumCodecs by contributing code, documentation, code reviews, comments and/or ideas:

Numcodecs bundles the c-blosc library.

Development of this package is supported by the MRC Centre for Genomics and Global Health.
