public class PigStorage extends FileInputLoadFunc implements StoreFuncInterface, LoadPushDown, LoadMetadata, StoreMetadata, OverwritableStoreFunc
An optional second constructor argument is provided that allows one to customize advanced behaviors. A list of available options is below:
-schema Reads/Stores the schema of the relation using a
hidden JSON file.
-noschema Ignores a stored schema during loading.
-tagFile Appends input source file name to beginning of each tuple.
-tagPath Appends input source file path to beginning of each tuple.
If -schema is specified, a hidden ".pig_schema" file is created in the output directory
when storing data. It is used by PigStorage (with or without -schema) during loading to determine the
field names and types of the data without the need for a user to explicitly provide the schema in an
as clause, unless -noschema is specified. No attempt to merge conflicting
schemas is made during loading. The first schema encountered during a file system scan is used.
If the '-schema' option is used during loading but the schema file is not present,
an error results.
In addition, using -schema drops a ".pig_headers" file in the output directory.
This file simply lists the delimited aliases. This is intended to make export to tools that can read
files with header lines easier (just cat the header to your data).
If -tagFile is specified, PigStorage will prepend the input source file name to each Tuple/row.
Usage: A = LOAD 'input' using PigStorage(',','-tagFile'); B = foreach A generate $0;
The first field (0th index) in each Tuple will contain the input file name.
If -tagPath is specified, PigStorage will prepend the input source file path to each Tuple/row.
Usage: A = LOAD 'input' using PigStorage(',','-tagPath'); B = foreach A generate $0;
The first field (0th index) in each Tuple will contain the input file path.
Note that regardless of whether or not you store the schema, you always need to specify the correct delimiter to read your data: if you store data using the delimiter "#" and then load it using the default delimiter, it will not be parsed correctly. A sketch of the full store/load round trip follows.
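A minimal sketch of that round trip, using Pig's embedded PigServer API; the paths, relation names, and schema here are hypothetical:

```java
import org.apache.pig.ExecType;
import org.apache.pig.PigServer;

public class SchemaRoundTrip {
    public static void main(String[] args) throws Exception {
        PigServer pig = new PigServer(ExecType.LOCAL);

        // Store with an explicit '#' delimiter and -schema; this drops the
        // hidden .pig_schema (and .pig_headers) files into the output dir.
        pig.registerQuery("A = LOAD 'input.txt' USING PigStorage(',') "
                + "AS (name:chararray, age:int);");
        pig.store("A", "out", "PigStorage('#', '-schema')");

        // Load it back: the stored .pig_schema supplies field names and
        // types, so no AS clause is needed. The '#' delimiter must still
        // be repeated, or the rows will not split correctly.
        pig.registerQuery("B = LOAD 'out' USING PigStorage('#');");
        pig.registerQuery("C = FOREACH B GENERATE name, age;");
        pig.openIterator("C");
    }
}
```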
The output.compression.enabled and output.compression.codec job properties are also honored when storing data. Loading from directories ending in .bz2 or .bz works automatically; other compression formats are not auto-detected on loading. A sketch of enabling compressed output follows.
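A hedged sketch of enabling compressed output through those two job properties via PigServer; the codec choice and paths are illustrative assumptions:

```java
import org.apache.pig.ExecType;
import org.apache.pig.PigServer;

public class CompressedStore {
    public static void main(String[] args) throws Exception {
        PigServer pig = new PigServer(ExecType.LOCAL);

        // Both properties are read on the store side; the codec must be a
        // Hadoop CompressionCodec available on the classpath.
        pig.getPigContext().getProperties()
           .setProperty("output.compression.enabled", "true");
        pig.getPigContext().getProperties()
           .setProperty("output.compression.codec",
                        "org.apache.hadoop.io.compress.GzipCodec");

        pig.registerQuery("A = LOAD 'input.txt' USING PigStorage(',');");
        pig.store("A", "out_gz", "PigStorage(',')"); // part files come out compressed
    }
}
```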
Nested classes/interfaces inherited from interface org.apache.pig.LoadPushDown: LoadPushDown.OperatorSet, LoadPushDown.RequiredField, LoadPushDown.RequiredFieldList, LoadPushDown.RequiredFieldResponse

| Modifier and Type | Field and Description |
|---|---|
| protected LoadCaster | caster |
| protected org.apache.hadoop.mapreduce.RecordReader | in |
| protected org.apache.commons.logging.Log | mLog |
| protected boolean[] | mRequiredColumns |
| protected ResourceSchema | schema |
| protected String | signature |
| protected org.apache.hadoop.mapreduce.RecordWriter | writer |
| Constructor and Description |
|---|
| PigStorage() |
| PigStorage(String delimiter): Constructs a Pig loader that uses the specified character as a field delimiter. |
| PigStorage(String delimiter, String options): Constructs a Pig loader that uses the specified character as a field delimiter and understands the given options. |
| Modifier and Type | Method and Description |
|---|---|
| void | checkSchema(ResourceSchema s): Set the schema for data to be stored. |
| void | cleanupOnFailure(String location, org.apache.hadoop.mapreduce.Job job): This method will be called by Pig if the job which contains this store fails. |
| void | cleanupOnSuccess(String location, org.apache.hadoop.mapreduce.Job job): This method will be called by Pig if the job which contains this store is successful, and some cleanup of intermediate resources is required. |
| void | cleanupOutput(POStore store, org.apache.hadoop.mapreduce.Job job): This method is called to cleanup the store/output location of this StoreFunc. |
| boolean | equals(Object obj) |
| boolean | equals(PigStorage other) |
| List<LoadPushDown.OperatorSet> | getFeatures(): Determine the operators that can be pushed to the loader. |
| org.apache.hadoop.mapreduce.InputFormat | getInputFormat(): This will be called during planning on the front end. |
| Tuple | getNext(): Retrieves the next tuple to be processed. |
| org.apache.hadoop.mapreduce.OutputFormat | getOutputFormat(): Return the OutputFormat associated with StoreFuncInterface. |
| String[] | getPartitionKeys(String location, org.apache.hadoop.mapreduce.Job job): Find what columns are partition keys for this input. |
| ResourceSchema | getSchema(String location, org.apache.hadoop.mapreduce.Job job): Get a schema for the data to be loaded. |
| ResourceStatistics | getStatistics(String location, org.apache.hadoop.mapreduce.Job job): Get statistics about the data to be loaded. |
| int | hashCode() |
| void | prepareToRead(org.apache.hadoop.mapreduce.RecordReader reader, PigSplit split): Initializes LoadFunc for reading data. |
| void | prepareToWrite(org.apache.hadoop.mapreduce.RecordWriter writer): Initialize StoreFuncInterface to write data. |
| LoadPushDown.RequiredFieldResponse | pushProjection(LoadPushDown.RequiredFieldList requiredFieldList): Indicate to the loader fields that will be needed. |
| void | putNext(Tuple f): Write a tuple to the data store. |
| protected DataByteArray | readField(byte[] bytes, int start, int end): Read the bytes between start and end into a DataByteArray for inclusion in the return tuple. |
| String | relToAbsPathForStoreLocation(String location, org.apache.hadoop.fs.Path curDir): This method is called by the Pig runtime in the front end to convert the output location to an absolute path if the location is relative. |
| void | setLocation(String location, org.apache.hadoop.mapreduce.Job job): Communicate to the loader the location of the object(s) being loaded. |
| void | setPartitionFilter(Expression partitionFilter): Set the filter for partitioning. |
| void | setStoreFuncUDFContextSignature(String signature): This method will be called by Pig both in the front end and back end to pass a unique signature to the StoreFuncInterface which it can use to store information in the UDFContext which it needs to store between various method invocations in the front end and back end. |
| void | setStoreLocation(String location, org.apache.hadoop.mapreduce.Job job): Communicate to the storer the location where the data needs to be stored. |
| void | setUDFContextSignature(String signature): This method will be called by Pig both in the front end and back end to pass a unique signature to the LoadFunc. |
| boolean | shouldOverwrite(): This method is called by the Pig runtime to determine whether to ignore output validation problems (see PigOutputFormat.checkOutputSpecs(org.apache.hadoop.mapreduce.JobContext) and InputOutputFileValidator#validate) and to delete the existing output. |
| void | storeSchema(ResourceSchema schema, String location, org.apache.hadoop.mapreduce.Job job): Store schema of the data being written. |
| void | storeStatistics(ResourceStatistics stats, String location, org.apache.hadoop.mapreduce.Job job): Store statistics about the data being written. |
Methods inherited from class FileInputLoadFunc: getSplitComparable
Methods inherited from class LoadFunc: getAbsolutePath, getCacheFiles, getLoadCaster, getPathStrings, getShipFiles, join, relativeToAbsolutePath, warn

Field Detail

protected org.apache.hadoop.mapreduce.RecordReader in
protected org.apache.hadoop.mapreduce.RecordWriter writer
protected final org.apache.commons.logging.Log mLog
protected String signature
protected ResourceSchema schema
protected LoadCaster caster
protected boolean[] mRequiredColumns
Constructor Detail

public PigStorage()
public PigStorage(String delimiter)
Parameters: delimiter - the single byte character that is used to separate fields. ("\t" is the default.)
Throws: org.apache.commons.cli.ParseException

public PigStorage(String delimiter, String options)
Understands the following options, which can be specified in the second parameter:
-schema Loads / Stores the schema of the relation using a hidden JSON file.
-noschema Ignores a stored schema during loading.
-tagFile Appends input source file name to beginning of each tuple.
-tagPath Appends input source file path to beginning of each tuple.
Parameters:
delimiter - the single byte character that is used to separate fields.
options - a list of options that can be used to modify PigStorage behavior
Throws: org.apache.commons.cli.ParseException

Method Detail
public Tuple getNext() throws IOException
Description copied from class: LoadFunc
Retrieves the next tuple to be processed.
Specified by: getNext in class LoadFunc
Throws: IOException - if there is an exception while retrieving the next tuple

public void putNext(Tuple f) throws IOException
Description copied from interface: StoreFuncInterface
Write a tuple to the data store.
Specified by: putNext in interface StoreFuncInterface
Parameters: f - the tuple to store.
Throws: IOException - if an exception occurs during the write

protected DataByteArray readField(byte[] bytes, int start, int end)
Read the bytes between start and end into a DataByteArray for inclusion in the return tuple.
Parameters:
bytes - byte array to copy data from
start - starting point to copy from
end - ending point to copy to, exclusive.
A hypothetical subclass illustrating this hook follows.
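Since readField is protected, subclasses can reinterpret the raw bytes of a single field. A sketch under stated assumptions (TrimmingStorage is not part of Pig) that strips surrounding whitespace before delegating to the default behavior:

```java
import org.apache.pig.builtin.PigStorage;
import org.apache.pig.data.DataByteArray;

// Hypothetical subclass: trims whitespace from each raw field before
// the usual conversion into a DataByteArray.
public class TrimmingStorage extends PigStorage {
    public TrimmingStorage(String delimiter) throws Exception {
        super(delimiter);
    }

    @Override
    protected DataByteArray readField(byte[] bytes, int start, int end) {
        // 'end' is exclusive, matching the contract documented above.
        while (start < end && Character.isWhitespace(bytes[start])) {
            start++;
        }
        while (end > start && Character.isWhitespace(bytes[end - 1])) {
            end--;
        }
        return super.readField(bytes, start, end);
    }
}
```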
public LoadPushDown.RequiredFieldResponse pushProjection(LoadPushDown.RequiredFieldList requiredFieldList) throws FrontendException
Description copied from interface: LoadPushDown
Indicate to the loader fields that will be needed.
Specified by: pushProjection in interface LoadPushDown
Parameters: requiredFieldList - RequiredFieldList indicating which columns will be needed. This structure is read only. User cannot make change to it inside pushProjection.
Throws: FrontendException

public boolean equals(PigStorage other)
public org.apache.hadoop.mapreduce.InputFormat getInputFormat()
Description copied from class: LoadFunc
This will be called during planning on the front end.
Specified by: getInputFormat in class LoadFunc

public void prepareToRead(org.apache.hadoop.mapreduce.RecordReader reader, PigSplit split)
Description copied from class: LoadFunc
Initializes LoadFunc for reading data.
Specified by: prepareToRead in class LoadFunc
Parameters:
reader - RecordReader to be used by this instance of the LoadFunc
split - The input PigSplit to process

public void setLocation(String location, org.apache.hadoop.mapreduce.Job job) throws IOException
Description copied from class: LoadFunc
Communicate to the loader the location of the object(s) being loaded. The location string passed to the LoadFunc here is the return value of LoadFunc.relativeToAbsolutePath(String, Path). Implementations should use this method to communicate the location (and any other information) to its underlying InputFormat through the Job object. This method will be called in the frontend and backend multiple times. Implementations should bear in mind that this method is called multiple times and should ensure there are no inconsistent side effects due to the multiple calls.
Specified by: setLocation in class LoadFunc
Parameters:
location - Location as returned by LoadFunc.relativeToAbsolutePath(String, Path)
job - the Job object; store or retrieve earlier stored information from the UDFContext
Throws: IOException - if the location is not valid.

public org.apache.hadoop.mapreduce.OutputFormat getOutputFormat()
Description copied from interface: StoreFuncInterface
Return the OutputFormat associated with StoreFuncInterface.
Specified by: getOutputFormat in interface StoreFuncInterface
Returns: the OutputFormat associated with StoreFuncInterface

public void prepareToWrite(org.apache.hadoop.mapreduce.RecordWriter writer)
Description copied from interface: StoreFuncInterface
Initialize StoreFuncInterface to write data.
Specified by: prepareToWrite in interface StoreFuncInterface
Parameters: writer - RecordWriter to use.

public void setStoreLocation(String location, org.apache.hadoop.mapreduce.Job job) throws IOException
Description copied from interface: StoreFuncInterface
Communicate to the storer the location where the data needs to be stored. The location string passed to the StoreFuncInterface here is the return value of StoreFuncInterface.relToAbsPathForStoreLocation(String, Path). This method will be called in the frontend and backend multiple times. Implementations should bear in mind that this method is called multiple times and should ensure there are no inconsistent side effects due to the multiple calls. StoreFuncInterface.checkSchema(ResourceSchema) will be called before any call to StoreFuncInterface.setStoreLocation(String, Job).
Specified by: setStoreLocation in interface StoreFuncInterface
Parameters:
location - Location returned by StoreFuncInterface.relToAbsPathForStoreLocation(String, Path)
job - The Job object
Throws: IOException - if the location is not valid.

public void checkSchema(ResourceSchema s) throws IOException
Description copied from interface: StoreFuncInterface
Set the schema for data to be stored.
Specified by: checkSchema in interface StoreFuncInterface
Parameters: s - to be checked
Throws: IOException - if this schema is not acceptable. It should include a detailed error message indicating what is wrong with the schema.

public String relToAbsPathForStoreLocation(String location, org.apache.hadoop.fs.Path curDir) throws IOException
Description copied from interface: StoreFuncInterface
This method is called by the Pig runtime in the front end to convert the output location to an absolute path if the location is relative. LoadFunc.getAbsolutePath(java.lang.String, org.apache.hadoop.fs.Path) provides a default implementation for hdfs and hadoop local file system and it can be used to implement this method.
Specified by: relToAbsPathForStoreLocation in interface StoreFuncInterface
Parameters:
location - location as provided in the "store" statement of the script
curDir - the current working directory based on any "cd" statements in the script before the "store" statement. If there are no "cd" statements in the script, this would be the home directory - /user/<username>
Throws: IOException - if the conversion is not possible

public void setUDFContextSignature(String signature)
Description copied from class: LoadFunc
This method will be called by Pig both in the front end and back end to pass a unique signature to the LoadFunc. The signature can be used to store into the UDFContext any information which the LoadFunc needs to store between various method invocations in the front end and back end. A use case is to store the LoadPushDown.RequiredFieldList passed to it in LoadPushDown.pushProjection(RequiredFieldList) for use in the back end before returning tuples in LoadFunc.getNext(). This method will be called before other methods in LoadFunc. A sketch of this pattern appears below.
Overrides: setUDFContextSignature in class LoadFunc
Parameters: signature - a unique signature to identify this LoadFunc
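A sketch of that use case under stated assumptions: a hypothetical loader (not PigStorage itself) keys its UDFContext properties by the signature so the back end can recover the pushed projection. UDFContext and ObjectSerializer are real Pig utilities; the class, method names, and property key are made up for illustration:

```java
import java.io.IOException;
import java.util.Properties;
import org.apache.pig.LoadPushDown.RequiredFieldList;
import org.apache.pig.impl.util.ObjectSerializer;
import org.apache.pig.impl.util.UDFContext;

// Fragment of a hypothetical LoadFunc implementation.
public class SignatureScratchpad {
    private String signature;

    public void setUDFContextSignature(String signature) {
        this.signature = signature; // remember the key for later lookups
    }

    // Front end: stash the pushed projection under this loader's signature.
    public void saveProjection(RequiredFieldList requiredFields) throws IOException {
        Properties props = UDFContext.getUDFContext()
                .getUDFProperties(getClass(), new String[] { signature });
        props.setProperty("required.fields", ObjectSerializer.serialize(requiredFields));
    }

    // Back end: the same property is visible again before getNext() runs.
    public RequiredFieldList loadProjection() throws IOException {
        Properties props = UDFContext.getUDFContext()
                .getUDFProperties(getClass(), new String[] { signature });
        return (RequiredFieldList) ObjectSerializer.deserialize(
                props.getProperty("required.fields"));
    }
}
```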
public List<LoadPushDown.OperatorSet> getFeatures()
Description copied from interface: LoadPushDown
Determine the operators that can be pushed to the loader.
Specified by: getFeatures in interface LoadPushDown

public void setStoreFuncUDFContextSignature(String signature)
Description copied from interface: StoreFuncInterface
This method will be called by Pig both in the front end and back end to pass a unique signature to the StoreFuncInterface which it can use to store information in the UDFContext which it needs to store between various method invocations in the front end and back end. This is necessary because in a Pig Latin script with multiple stores, the different instances of store functions need to be able to find their (and only their) data in the UDFContext object.
Specified by: setStoreFuncUDFContextSignature in interface StoreFuncInterface
Parameters: signature - a unique signature to identify this StoreFuncInterface

public void cleanupOnFailure(String location, org.apache.hadoop.mapreduce.Job job) throws IOException
Description copied from interface: StoreFuncInterface
This method will be called by Pig if the job which contains this store fails.
Specified by: cleanupOnFailure in interface StoreFuncInterface
Parameters:
location - Location returned by StoreFuncInterface.relToAbsPathForStoreLocation(String, Path)
job - The Job object - this should be used only to obtain cluster properties through JobContextImpl.getConfiguration() and not to set/query any runtime job information.
Throws: IOException

public void cleanupOnSuccess(String location, org.apache.hadoop.mapreduce.Job job) throws IOException
Description copied from interface: StoreFuncInterface
This method will be called by Pig if the job which contains this store is successful, and some cleanup of intermediate resources is required.
Specified by: cleanupOnSuccess in interface StoreFuncInterface
Parameters:
location - Location returned by StoreFuncInterface.relToAbsPathForStoreLocation(String, Path)
job - The Job object - this should be used only to obtain cluster properties through JobContextImpl.getConfiguration() and not to set/query any runtime job information.
Throws: IOException

public ResourceSchema getSchema(String location, org.apache.hadoop.mapreduce.Job job) throws IOException
Description copied from interface: LoadMetadata
Get a schema for the data to be loaded.
Specified by: getSchema in interface LoadMetadata
Parameters:
location - Location as returned by LoadFunc.relativeToAbsolutePath(String, org.apache.hadoop.fs.Path)
job - The Job object - this should be used only to obtain cluster properties through JobContextImpl.getConfiguration() and not to set/query any runtime job information.
Throws: IOException - if an exception occurs while determining the schema

public ResourceStatistics getStatistics(String location, org.apache.hadoop.mapreduce.Job job) throws IOException
Description copied from interface: LoadMetadata
Get statistics about the data to be loaded. If the implementing class also extends LoadFunc, then LoadFunc.setLocation(String, org.apache.hadoop.mapreduce.Job) is guaranteed to be called before this method.
Specified by: getStatistics in interface LoadMetadata
Parameters:
location - Location as returned by LoadFunc.relativeToAbsolutePath(String, org.apache.hadoop.fs.Path)
job - The Job object - this should be used only to obtain cluster properties through JobContextImpl.getConfiguration() and not to set/query any runtime job information.
Throws: IOException - if an exception occurs while retrieving statistics

public void setPartitionFilter(Expression partitionFilter) throws IOException
Description copied from interface: LoadMetadata
Set the filter for partitioning. If the implementation returns null in LoadMetadata.getPartitionKeys(String, Job), then this method is not called by the Pig runtime. This method is also not called by the Pig runtime if there are no partition filter conditions.
Specified by: setPartitionFilter in interface LoadMetadata
Parameters: partitionFilter - that describes the filter for partitioning
Throws: IOException - if the filter is not compatible with the storage mechanism or contains non-partition fields.

public String[] getPartitionKeys(String location, org.apache.hadoop.mapreduce.Job job) throws IOException
Description copied from interface: LoadMetadata
Find what columns are partition keys for this input.
Specified by: getPartitionKeys in interface LoadMetadata
Parameters:
location - Location as returned by LoadFunc.relativeToAbsolutePath(String, org.apache.hadoop.fs.Path)
job - The Job object - this should be used only to obtain cluster properties through JobContextImpl.getConfiguration() and not to set/query any runtime job information.
Throws: IOException - if an exception occurs while retrieving partition keys

public void storeSchema(ResourceSchema schema, String location, org.apache.hadoop.mapreduce.Job job) throws IOException
Description copied from interface: StoreMetadata
Store schema of the data being written.
Specified by: storeSchema in interface StoreMetadata
Parameters:
schema - Schema to be recorded
location - Location as returned by LoadFunc.relativeToAbsolutePath(String, org.apache.hadoop.fs.Path)
job - The Job object - this should be used only to obtain cluster properties through JobContextImpl.getConfiguration() and not to set/query any runtime job information.
Throws: IOException

public void storeStatistics(ResourceStatistics stats, String location, org.apache.hadoop.mapreduce.Job job) throws IOException
Description copied from interface: StoreMetadata
Store statistics about the data being written.
Specified by: storeStatistics in interface StoreMetadata
Parameters:
stats - statistics to be recorded
location - Location as returned by LoadFunc.relativeToAbsolutePath(String, org.apache.hadoop.fs.Path)
job - The Job object - this should be used only to obtain cluster properties through JobContextImpl.getConfiguration() and not to set/query any runtime job information.
Throws: IOException

public boolean shouldOverwrite()
Description copied from interface: OverwritableStoreFunc
This method is called by the Pig runtime to determine whether to ignore output validation problems (see PigOutputFormat.checkOutputSpecs(org.apache.hadoop.mapreduce.JobContext) and InputOutputFileValidator#validate) and to delete the existing output.
Specified by: shouldOverwrite in interface OverwritableStoreFunc
Returns: whether to overwrite the output of this StoreFunc.

public void cleanupOutput(POStore store, org.apache.hadoop.mapreduce.Job job) throws IOException
Description copied from interface: OverwritableStoreFunc
This method is called to cleanup the store/output location of this StoreFunc.
Specified by: cleanupOutput in interface OverwritableStoreFunc
Parameters:
store - The POStore object to get info about the store operator for this store function.
job - The Job object to get job related information.
Throws: IOException - if an exception occurs during the cleanup.

Copyright © 2007-2017 The Apache Software Foundation