pandera.api.pyspark.model.DataFrameModel

class pandera.api.pyspark.model.DataFrameModel(*args, **kwargs)[source]

Definition of a DataFrameSchema.

New in version 0.16.0.

See the User Guide for more.
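A DataFrameModel declares such a schema as a class whose attributes are annotated with PySpark types and constrained with Field. The ProductModel below is only an illustrative sketch; the class name, columns, and checks are assumptions, not part of this API:

>>> import pandera.pyspark as pa
>>> import pyspark.sql.types as T
>>>
>>> # Hypothetical model: the column names and checks are examples only.
>>> class ProductModel(pa.DataFrameModel):
...     product: T.StringType() = pa.Field(str_startswith="B")
...     price: T.IntegerType() = pa.Field(gt=5)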

Check that every column in a dataframe has a corresponding column in the schema.

Parameters:
  • check_obj – DataFrame object, i.e. the dataframe to be validated.

  • head – Not used, since Spark DataFrames have no concept of head or tail.

  • tail – Not used, since Spark DataFrames have no concept of head or tail.

  • sample – validate a random sample of rows. The value is a fraction between 0 and 1; for example, 10% of the rows can be sampled by setting the value to 0.1. See https://spark.apache.org/docs/3.1.2/api/python/reference/api/pyspark.sql.DataFrame.sample.html

  • random_state – random seed for the sample argument.

  • lazy – if True, lazily evaluates the dataframe against all validation checks and raises a SchemaErrors exception. Otherwise, raises a SchemaError as soon as one occurs.

  • inplace – if True, applies coercion to the object being validated; otherwise, creates a copy of the data.

Returns:

validated DataFrame

Raises:

SchemaError – when DataFrame violates built-in or custom checks.

Example:

Calling schema.validate returns the dataframe.

>>> import pandera.pyspark as psa
>>> from pyspark.sql import SparkSession
>>> import pyspark.sql.types as T
>>> spark = SparkSession.builder.getOrCreate()
>>>
>>> data = [("Bread", 9), ("Butter", 15)]
>>> spark_schema = T.StructType(
...         [
...             T.StructField("product", T.StringType(), False),
...             T.StructField("price", T.IntegerType(), False),
...         ],
...     )
>>> df = spark.createDataFrame(data=data, schema=spark_schema)
>>>
>>> schema_withchecks = psa.DataFrameSchema(
...         columns={
...             "product": psa.Column("str", checks=psa.Check.str_startswith("B")),
...             "price": psa.Column("int", checks=psa.Check.gt(5)),
...         },
...         name="product_schema",
...         description="schema for product info",
...         title="ProductSchema",
...     )
>>>
>>> schema_withchecks.validate(df).take(2)
    [Row(product='Bread', price=9), Row(product='Butter', price=15)]
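
The sample, random_state, and lazy arguments described above can be combined in the same call. The snippet below is a usage sketch that reuses df and schema_withchecks from the example; with the PySpark backend, lazily collected failures can also be inspected through the validated dataframe's pandera.errors attribute.

>>> # Sketch: validate a seeded ~10% random sample, collecting all failures lazily.
>>> df_out = schema_withchecks.validate(df, sample=0.1, random_state=42, lazy=True)
>>> errors = df_out.pandera.errors  # dict of accumulated failures; empty when validation passes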

Methods

get_metadata

Provide metadata at the column and schema level.

pydantic_validate

Verify that the input is a compatible dataframe model.

to_ddl

Recover the fields of the DataFrameModel as a PySpark DDL string.

to_schema

Create DataFrameSchema from the DataFrameModel.

to_structtype

Recover the fields of the DataFrameModel as a PySpark StructType object.

to_yaml

Convert the schema to YAML using io.to_yaml.

validate

Check that every column in a dataframe has a corresponding column in the schema.
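
As a rough sketch of how the conversion methods fit together (reusing the hypothetical ProductModel defined in the sketch near the top of this page; the exact strings and objects returned depend on the pandera version):

>>> schema = ProductModel.to_schema()        # DataFrameSchema built from the model
>>> struct = ProductModel.to_structtype()    # pyspark.sql.types.StructType with one StructField per column
>>> ddl = ProductModel.to_ddl()              # DDL string along the lines of 'product STRING,price INT'
>>> meta = ProductModel.get_metadata()       # dict of column- and schema-level metadata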