Getting Started
Arrow manages data in arrays (pyarrow.Array), which can be grouped in tables (pyarrow.Table) to represent the columns of tabular data.
Arrow also supports various formats for getting tabular data in and out of disk and over the network. The most commonly used formats are Parquet (Reading and Writing the Apache Parquet Format) and the IPC format (Streaming, Serialization, and IPC).
Creating Arrays and Tables
Arrays in Arrow are collections of data of uniform type. That allows Arrow to use the best-performing implementation to store the data and perform computations on it. So each array is meant to have data and a type:
In [1]: import pyarrow as pa
In [2]: days = pa.array([1, 12, 17, 23, 28], type=pa.int8())
Multiple arrays can be combined into a table to form the columns of tabular data, with each array attached to a column name:
In [3]: months = pa.array([1, 3, 5, 7, 1], type=pa.int8())
In [4]: years = pa.array([1990, 2000, 1995, 2000, 1995], type=pa.int16())
In [5]: birthdays_table = pa.table([days, months, years],
...: names=["days", "months", "years"])
...:
In [6]: birthdays_table
Out[6]:
pyarrow.Table
days: int8
months: int8
years: int16
----
days: [[1,12,17,23,28]]
months: [[1,3,5,7,1]]
years: [[1990,2000,1995,2000,1995]]
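If no type is specified, pa.array will infer one from the values, and None entries become nulls. A minimal sketch (the variable name is illustrative):

scores = pa.array([10, None, 30])  # type is inferred as int64
scores.type        # int64
scores.null_count  # 1, since None became a null entry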
See Data Types and In-Memory Data Model for more details.
Saving and Loading Tables
Once you have tabular data, Arrow provides out-of-the-box features to save and restore that data in common formats like Parquet:
In [7]: import pyarrow.parquet as pq
In [8]: pq.write_table(birthdays_table, 'birthdays.parquet')
Once your data is on disk, loading it back is a single function call. Arrow is heavily optimized for memory and speed, so loading data will be as quick as possible:
In [9]: reloaded_birthdays = pq.read_table('birthdays.parquet')
In [10]: reloaded_birthdays
Out[10]:
pyarrow.Table
days: int8
months: int8
years: int16
----
days: [[1,12,17,23,28]]
months: [[1,3,5,7,1]]
years: [[1990,2000,1995,2000,1995]]
Saving and loading back data in Arrow is usually done through the Parquet, IPC (Feather File Format), CSV, or line-delimited JSON formats.
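For example, the same table can be round-tripped through the Feather (Arrow IPC) format. A minimal sketch, assuming the file name birthdays.feather:

import pyarrow.feather as feather

# Write the table to an Arrow IPC (Feather v2) file
feather.write_feather(birthdays_table, 'birthdays.feather')

# Read it back into a pyarrow.Table
reloaded = feather.read_table('birthdays.feather')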
Performing Computations
Arrow ships with a suite of compute functions that can be applied to its arrays and tables, so it is possible to apply transformations to the data through the compute functions:
In [11]: import pyarrow.compute as pc
In [12]: pc.value_counts(birthdays_table["years"])
Out[12]:
<pyarrow.lib.StructArray object at 0x7f1563488fa0>
-- is_valid: all not null
-- child 0 type: int16
[
1990,
2000,
1995
]
-- child 1 type: int64
[
1,
2,
2
]
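Compute functions return Arrow data, so their results can feed directly into other operations; for instance, a boolean comparison can be used as a mask for Table.filter. A minimal sketch (the variable name is illustrative):

# keep only the rows where the years column equals 2000
born_in_2000 = birthdays_table.filter(pc.equal(birthdays_table["years"], 2000))
born_in_2000.num_rows  # 2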
See Compute Functions for a list of available compute functions and how to use them.
Working with large data
Arrow also provides the pyarrow.dataset API to work with large data; it will handle partitioning your data into smaller chunks for you:
In [13]: import pyarrow.dataset as ds
In [14]: ds.write_dataset(birthdays_table, "savedir", format="parquet",
....: partitioning=ds.partitioning(
....: pa.schema([birthdays_table.schema.field("years")])
....: ))
....:
Loading back the partitioned dataset will detect the chunks:
In [15]: birthdays_dataset = ds.dataset("savedir", format="parquet", partitioning=["years"])
In [16]: birthdays_dataset.files
Out[16]:
['savedir/1990/part-0.parquet',
'savedir/1995/part-0.parquet',
'savedir/2000/part-0.parquet']
and will lazily load chunks of data only when iterating over them:
In [17]: import datetime
In [18]: current_year = datetime.datetime.utcnow().year
In [19]: for table_chunk in birthdays_dataset.to_batches():
....: print("AGES", pc.subtract(current_year, table_chunk["years"]))
....:
AGES [
34
]
AGES [
29,
29
]
AGES [
24,
24
]
For further details on how to work with big datasets, how to filter them, how to project them, and so on, refer to the Tabular Datasets documentation.
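As a taste of what that covers, a dataset can be filtered and projected at scan time, so only the matching rows and the requested columns are read. A minimal sketch using the dataset created above (the variable name is illustrative):

# read back only two columns, restricted to the years == 2000 partition
table_2000 = birthdays_dataset.to_table(
    columns=["days", "months"],
    filter=ds.field("years") == 2000
)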
Continuing from here
For digging further into Arrow, you might want to read the PyArrow Documentation itself or the Arrow Python Cookbook.