pandera.api.pyspark.model.DataFrameModel
- class pandera.api.pyspark.model.DataFrameModel(*args, **kwargs)
Definition of a DataFrameSchema. New in version 0.16.0. See the User Guide for more.
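Schemas are typically declared with the class-based API, where each typed class attribute becomes a column. A minimal sketch, assuming the Field component exported by pandera.pyspark; the ProductSchema name and its two columns mirror the example further below and are purely illustrative:

>>> import pandera.pyspark as psa
>>> import pyspark.sql.types as T
>>>
>>> class ProductSchema(psa.DataFrameModel):
...     # Each attribute declares a column: a pyspark type annotation
...     # plus check constraints passed to psa.Field.
...     product: T.StringType() = psa.Field(str_startswith="B")
...     price: T.IntegerType() = psa.Field(gt=5)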
- classmethod validate(check_obj, head, tail, sample, random_state, lazy, inplace)
Check if all columns in a dataframe have a column in the Schema.
- Parameters:
check_obj – DataFrame object, i.e. the dataframe to be validated.
head – Not used, since Spark has no concept of head or tail.
tail – Not used, since Spark has no concept of head or tail.
sample – validate a random sample of rows. The value ranges from 0 to 1; for example, 10% of rows can be sampled by setting the value to 0.1 (see the usage sketch after the example below). Refer to https://spark.apache.org/docs/3.1.2/api/python/reference/api/pyspark.sql.DataFrame.sample.html
random_state – random seed for the sample argument.
lazy – if True, lazily evaluates the dataframe against all validation checks and raises a SchemaErrors exception. Otherwise, raises a SchemaError as soon as one occurs.
inplace – if True, applies coercion to the object of validation; otherwise creates a copy of the data.
- Returns:
validated DataFrame
- Raises:
SchemaError – when DataFrame violates built-in or custom checks.
- Example:
Calling schema.validate returns the dataframe.

>>> import pandera.pyspark as psa
>>> from pyspark.sql import SparkSession
>>> import pyspark.sql.types as T
>>> spark = SparkSession.builder.getOrCreate()
>>>
>>> data = [("Bread", 9), ("Butter", 15)]
>>> spark_schema = T.StructType(
...     [
...         T.StructField("product", T.StringType(), False),
...         T.StructField("price", T.IntegerType(), False),
...     ],
... )
>>> df = spark.createDataFrame(data=data, schema=spark_schema)
>>>
>>> schema_withchecks = psa.DataFrameSchema(
...     columns={
...         "product": psa.Column("str", checks=psa.Check.str_startswith("B")),
...         "price": psa.Column("int", checks=psa.Check.gt(5)),
...     },
...     name="product_schema",
...     description="schema for product info",
...     title="ProductSchema",
... )
>>>
>>> schema_withchecks.validate(df).take(2)
[Row(product='Bread', price=9), Row(product='Butter', price=15)]
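The same checks can also run through the class-based API. A hedged sketch reusing the illustrative ProductSchema model defined above and the df from the example; the sample, random_state, and lazy values are chosen only for demonstration:

>>> df_out = ProductSchema.validate(check_obj=df)
>>>
>>> df_sample_out = ProductSchema.validate(
...     check_obj=df,
...     sample=0.1,        # validate a 10% random sample of rows
...     random_state=42,   # fixed seed makes the sample reproducible
...     lazy=True,         # evaluate all checks and collect every failure
... )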
Methods
get_metadata() – Provide metadata for columns and schema level.
pydantic_validate(schema_model) – Verify that the input is a compatible dataframe model.
to_ddl() – Recover fields of DataFrameModel as a Pyspark DDL string.
to_schema() – Create DataFrameSchema from the DataFrameModel.
to_structtype() – Recover fields of DataFrameModel as a Pyspark StructType object.
to_yaml() – Convert Schema to yaml using io.to_yaml.
validate(check_obj[, head, tail, sample, ...]) – Check if all columns in a dataframe have a column in the Schema.
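A short sketch of how the conversion helpers listed above might be used, again with the illustrative ProductSchema model; the DDL string shown in the comment is an assumption about the output format:

>>> schema = ProductSchema.to_schema()      # pandera DataFrameSchema built from the model
>>> struct = ProductSchema.to_structtype()  # pyspark StructType with the same fields
>>> ddl = ProductSchema.to_ddl()            # DDL string, e.g. "product STRING, price INT"
>>> yaml_str = ProductSchema.to_yaml()      # YAML text via io.to_yaml
>>> meta = ProductSchema.get_metadata()     # column- and schema-level metadata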