This product contains a subset of the Parquet files published in https://github.com/apache/parquet-testing. It includes both correct (data) and bad (bad_data) Parquet files. The data/geospatial subdirectory contains test files for the new GEOMETRY logical type.
TODO: Document what each file is in the table above.
Test files with the .parquet.encrypted suffix are encrypted using Parquet Modular Encryption.
A detailed description of the Parquet Modular Encryption specification can be found here:
https://github.com/apache/parquet-format/blob/encryption/Encryption.md
Following are the keys and key ids (when using key_retriever) used to encrypt the encrypted columns and footer in all the encrypted files:
The following files are encrypted with AAD prefix "tester":
A sample that reads and checks these files can be found at the following tests in Parquet C++:
The external_key_material_java.parquet.encrypted file was encrypted using parquet-mr with
external key material enabled, so the key material is found in the
_KEY_MATERIAL_FOR_external_key_material_java.parquet.encrypted.json file.
This data was written using the org.apache.parquet.crypto.keytools.mocks.InMemoryKMS KMS client,
which is compatible with the TestOnlyInServerWrapKms KMS client used in C++ tests.
The schema for the datapage_v1-*-checksum.parquet test files is:
The detailed structure for these files is as follows:
data/datapage_v1-uncompressed-checksum.parquet:
data/datapage_v1-snappy-compressed-checksum.parquet:
data/datapage_v1-corrupt-checksum.parquet:
The schema for the *-dict-*-checksum.parquet test files is:
data/rle-dict-snappy-checksum.parquet:
data/plain-dict-uncompressed-checksum.parquet:
data/rle-dict-uncompressed-corrupt-checksum.parquet:
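As a hedged illustration (not part of the original test suite), these checksum files can be exercised with pyarrow, assuming a version (13.0 or later) that exposes the page_checksum_verification option:

```python
import pyarrow.parquet as pq

# Reading a file with valid page CRCs and verification enabled should succeed.
pq.ParquetFile("data/datapage_v1-uncompressed-checksum.parquet",
               page_checksum_verification=True).read()

# Reading the file with corrupted page checksums should fail once
# verification is enabled (pyarrow surfaces this as an I/O error).
try:
    pq.ParquetFile("data/datapage_v1-corrupt-checksum.parquet",
                   page_checksum_verification=True).read()
except OSError as exc:
    print("CRC mismatch detected:", exc)
```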
Bloom filter examples have been generated by parquet-mr. They are not complete Parquet files; they contain only the Bloom filter header and payload.
For each of bloom_filter.bin and bloom_filter.xxhash.bin, the bloom filter
was generated by inserting the strings "hello", "parquet", "bloom", "filter".
bloom_filter.bin uses the original Murmur3-based bloom filter format as of
https://github.com/apache/parquet-format/commit/54839ad5e04314c944fed8aa4bc6cf15e4a58698.
bloom_filter.xxhash.bin uses the newer xxHash-based bloom filter format as of
https://github.com/apache/parquet-format/commit/3fb10e00c2204bf1c6cc91e094c59e84cefcee33.
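The .bin files start with the serialized Bloom filter header, which the following sketch does not parse. Instead, it is a minimal in-memory illustration of the split-block Bloom filter algorithm from the Parquet spec, using the same four strings; it assumes the third-party xxhash package for the XXH64 hash:

```python
import xxhash

# Salt constants from the Parquet split-block Bloom filter specification.
SALT = [0x47b6137b, 0x44974d91, 0x8824ad5b, 0xa2b7289d,
        0x705495c7, 0x2df1424b, 0x9efc4947, 0x5c6bfb31]

def block_mask(x: int) -> list[int]:
    # One bit set in each of the eight 32-bit words of a 256-bit block.
    return [1 << (((x * SALT[i]) & 0xFFFFFFFF) >> 27) for i in range(8)]

class SplitBlockBloomFilter:
    def __init__(self, num_blocks: int = 32):
        self.blocks = [[0] * 8 for _ in range(num_blocks)]

    def _locate(self, value: bytes):
        h = xxhash.xxh64_intdigest(value, seed=0)
        # Upper 32 bits pick the block, lower 32 bits drive the mask.
        block_index = ((h >> 32) * len(self.blocks)) >> 32
        return self.blocks[block_index], block_mask(h & 0xFFFFFFFF)

    def insert(self, value: bytes) -> None:
        block, mask = self._locate(value)
        for i in range(8):
            block[i] |= mask[i]

    def check(self, value: bytes) -> bool:
        block, mask = self._locate(value)
        return all(block[i] & mask[i] for i in range(8))

bf = SplitBlockBloomFilter()
for word in (b"hello", b"parquet", b"bloom", b"filter"):
    bf.insert(word)
assert all(bf.check(w) for w in (b"hello", b"parquet", b"bloom", b"filter"))
```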
Prior to version 1.4.0, the C++ Parquet writer wrote NaN values into min and max statistics. It has since been updated to ignore NaN values when calculating statistics, but for backwards compatibility the following rules were established (in PARQUET-1222):
For backwards compatibility, the following rules apply when reading files (a sketch applying them follows this list):
- If the min is a NaN, it should be ignored.
- If the max is a NaN, it should be ignored.
- If the min is +0, the row group may contain -0 values as well.
- If the max is -0, the row group may contain +0 values as well.
- When looking for NaN values, min and max should be ignored.
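A minimal sketch of how a reader might apply these rules before using float statistics; the function name and shape are illustrative, not an API from any Parquet implementation:

```python
import math

def usable_min_max(stat_min, stat_max):
    """Drop NaN statistics and widen signed zeros per the rules above."""
    lo = None if stat_min is None or math.isnan(stat_min) else stat_min
    hi = None if stat_max is None or math.isnan(stat_max) else stat_max
    # A min of +0 may hide -0 values in the row group, and a max of -0 may
    # hide +0 values, so widen both to the permissive zero.
    if lo == 0.0:
        lo = -0.0
    if hi == 0.0:
        hi = 0.0
    return lo, hi

# When filtering for NaN values themselves, min and max must be ignored entirely.
```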
The file nan_in_stats.parquet was generated with:
The file large_string_map.brotli.parquet was generated with:
It is meant to exercise reading of structured data where each value is smaller than 2GB but the combined uncompressed column chunk size is greater than 2GB.
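The original generation snippet is not reproduced here. As a rough, hedged sketch, a file with similar characteristics could be produced with pyarrow along these lines (the column name, key sizes, and chunking are assumptions, and building it needs several GB of RAM):

```python
import pyarrow as pa
import pyarrow.parquet as pq

# Three map values, each with a single ~1 GiB string key: no individual value
# exceeds 2 GB, but the combined uncompressed column chunk does.
key = "a" * (1 << 30)
chunk = pa.array([[(key, 1)]], type=pa.map_(pa.string(), pa.int32()))
arr = pa.chunked_array([chunk, chunk, chunk])
table = pa.table({"arr": arr})
pq.write_table(table, "large_string_map.brotli.parquet", compression="BROTLI")
```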
The files float16_zeros_and_nans.parquet and float16_nonzeros_and_nans.parquet
are meant to exercise a variety of test cases regarding Float16 columns (which
are represented as 2-byte FixedLenByteArrays), including:
- proper ordering of +0 and -0 (if both values are present, min is always -0 and max is always +0)

The aforementioned files were generated with:
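The original script is not included above. A minimal sketch that writes a Float16 column containing signed zeros and NaNs, assuming a pyarrow version with Float16 Parquet support (15.0 or later) and an assumed output name, could look like:

```python
import numpy as np
import pyarrow as pa
import pyarrow.parquet as pq

# Half-precision values exercising signed zeros, finite values, and NaN;
# pyarrow stores pa.float16() as a 2-byte FixedLenByteArray with the
# Float16 logical type.
values = np.array([0.0, -0.0, 1.5, -1.5, np.nan], dtype=np.float16)
table = pa.table({"f16": pa.array(values)})
pq.write_table(table, "float16_example.parquet")
```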
byte_stream_split.zstd.parquet is generated by pyarrow 14.0.2 using the following code:
This is a practical case where BYTE_STREAM_SPLIT encoding obtains a smaller file size than PLAIN or dictionary.
Since the distributions are random normals centered at 0, each byte position carries nontrivial information.
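The original snippet is not reproduced above. As a hedged sketch, a similar file could be produced with recent pyarrow as follows (the column names, row count, and seed are assumptions):

```python
import numpy as np
import pyarrow as pa
import pyarrow.parquet as pq

# Random normals centered at 0, so every byte position varies.
rng = np.random.default_rng(42)
table = pa.table({
    "f32": rng.standard_normal(300_000).astype(np.float32),
    "f64": rng.standard_normal(300_000),
})
pq.write_table(
    table,
    "byte_stream_split_example.zstd.parquet",
    compression="zstd",
    use_dictionary=False,        # ensure BYTE_STREAM_SPLIT is actually used
    use_byte_stream_split=True,  # apply BYTE_STREAM_SPLIT to eligible columns
)
```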
byte_stream_split_extended.gzip.parquet is generated by pyarrow 16.0.0.
It contains 7 pairs of columns, each in two variants containing the same
values: one PLAIN-encoded and one BYTE_STREAM_SPLIT-encoded:
To check conformance of a BYTE_STREAM_SPLIT decoder, read each
BYTE_STREAM_SPLIT-encoded column and compare the decoded values against
the values from the corresponding PLAIN-encoded column. The values should
be equal.
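A hedged sketch of such a check with pyarrow, assuming the paired columns are distinguished by "_plain" and "_byte_stream_split" suffixes (the naming convention is an assumption):

```python
import pyarrow.parquet as pq

table = pq.read_table("data/byte_stream_split_extended.gzip.parquet")
plain = {name.removesuffix("_plain"): table[name]
         for name in table.column_names if name.endswith("_plain")}
split = {name.removesuffix("_byte_stream_split"): table[name]
         for name in table.column_names if name.endswith("_byte_stream_split")}
# Note: Arrow's structural equality treats NaN as unequal; this sketch
# assumes the test values contain no NaNs.
for key, column in plain.items():
    assert column.equals(split[key]), f"mismatch in column pair {key!r}"
```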
A number of producers, such as Presto/Trino/Athena, have been creating files with schemas where the Map key fields are marked as optional rather than required. This is not spec-compliant, yet appears in a number of existing data files in the wild.
This issue has been fixed in:
These problematic files can be recreated for testing arrow-rs #5630 with the relevant Presto/Trino CLI, or with the AWS Athena console:
The schema in the created file is:
For the file: binary_truncated_min_max.parquet
The file contains six columns written with parquet-rs 55.1.0 with statistics_truncate_length=2.
The contents are the following:
Columns utf8_full_truncation and binary_full_truncation have both min and max values truncated, and is_{min/max}_value_exact are false.
Columns utf8_partial_truncation and binary_partial_truncation have a truncated min value but an exact max value: is_min_value_exact is false while is_max_value_exact is true.
Columns utf8_no_truncation and binary_no_truncation contain min and max values short enough that no truncation is needed; both is_{min/max}_value_exact are true.
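As a hedged sketch, the (possibly truncated) column chunk statistics can be inspected with pyarrow; note that pyarrow's Statistics object may not expose the is_{min/max}_value_exact flags, so checking those may require a reader that surfaces the raw footer metadata (e.g. parquet-rs):

```python
import pyarrow.parquet as pq

# Print the stored min/max for each column chunk of the first row group.
meta = pq.ParquetFile("data/binary_truncated_min_max.parquet").metadata
rg = meta.row_group(0)
for i in range(rg.num_columns):
    col = rg.column(i)
    stats = col.statistics
    print(col.path_in_schema, stats.min, stats.max)
```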