public class FixedWidthLoader extends LoadFunc implements LoadMetadata, LoadPushDown
| Modifier and Type | Class and Description |
|---|---|
| static class | FixedWidthLoader.FixedWidthField |

Nested classes/interfaces inherited from interface org.apache.pig.LoadPushDown: LoadPushDown.OperatorSet, LoadPushDown.RequiredField, LoadPushDown.RequiredFieldList, LoadPushDown.RequiredFieldResponse

| Constructor and Description |
|---|
| FixedWidthLoader() |
| FixedWidthLoader(String columnSpec) |
| FixedWidthLoader(String columnSpec, String skipHeaderStr) |
| FixedWidthLoader(String columnSpec, String skipHeaderStr, String schemaStr) |
| Modifier and Type | Method and Description |
|---|---|
| List<LoadPushDown.OperatorSet> | getFeatures() - Determine the operators that can be pushed to the loader. |
| org.apache.hadoop.mapreduce.InputFormat | getInputFormat() - This will be called during planning on the front end. |
| Tuple | getNext() - Retrieves the next tuple to be processed. |
| String[] | getPartitionKeys(String location, org.apache.hadoop.mapreduce.Job job) - Find what columns are partition keys for this input. |
| ResourceSchema | getSchema(String location, org.apache.hadoop.mapreduce.Job job) - Get a schema for the data to be loaded. |
| ResourceStatistics | getStatistics(String location, org.apache.hadoop.mapreduce.Job job) - Get statistics about the data to be loaded. |
| static ArrayList<FixedWidthLoader.FixedWidthField> | parseColumnSpec(String spec) |
| void | prepareToRead(org.apache.hadoop.mapreduce.RecordReader reader, PigSplit split) - Initializes LoadFunc for reading data. |
| LoadPushDown.RequiredFieldResponse | pushProjection(LoadPushDown.RequiredFieldList requiredFieldList) - Indicate to the loader fields that will be needed. |
| void | setLocation(String location, org.apache.hadoop.mapreduce.Job job) - Communicate to the loader the location of the object(s) being loaded. |
| void | setPartitionFilter(Expression partitionFilter) - Set the filter for partitioning. |
| void | setUDFContextSignature(String signature) - This method will be called by Pig both in the front end and back end to pass a unique signature to the LoadFunc. |
Methods inherited from class org.apache.pig.LoadFunc: getAbsolutePath, getCacheFiles, getLoadCaster, getPathStrings, getShipFiles, join, relativeToAbsolutePath, warn

public FixedWidthLoader()
public FixedWidthLoader(String columnSpec)
public static ArrayList<FixedWidthLoader.FixedWidthField> parseColumnSpec(String spec)
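The column spec names the character positions of each field. Below is a hedged sketch of constructing the loader and inspecting a parsed spec; the spec grammar shown, the "SKIP_HEADER" literal, and the piggybank package path are illustrative assumptions, not confirmed by this page:

```java
import java.util.ArrayList;

// Assumed package; stock Pig ships this loader in piggybank.
import org.apache.pig.piggybank.storage.FixedWidthLoader;

public class FixedWidthDemo {
    public static void main(String[] args) {
        // Assumed spec grammar: comma-separated, 1-based inclusive column ranges.
        ArrayList<FixedWidthLoader.FixedWidthField> fields =
                FixedWidthLoader.parseColumnSpec("1-5, 8, 10-14");
        System.out.println("parsed " + fields.size() + " fields");

        // Three-argument form: column spec, header handling, schema string.
        // "SKIP_HEADER" is an assumed literal for skipHeaderStr.
        FixedWidthLoader loader = new FixedWidthLoader(
                "1-5, 8, 10-14",
                "SKIP_HEADER",
                "id: int, name: chararray, total: double");
    }
}
```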
public org.apache.hadoop.mapreduce.InputFormat getInputFormat() throws IOException

Description copied from class: LoadFunc. This will be called during planning on the front end.
Specified by: getInputFormat in class LoadFunc
Throws: IOException - if there is an exception during InputFormat construction
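Since the loader reads line-oriented text, a plausible implementation (a sketch under that assumption, not necessarily the shipped code) simply instantiates Hadoop's TextInputFormat:

```java
import java.io.IOException;
import org.apache.hadoop.mapreduce.InputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;

@Override
public InputFormat getInputFormat() throws IOException {
    // Pig asks for an instance rather than a class name so the loader
    // can control how the InputFormat is constructed.
    return new TextInputFormat();
}
```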
public void setLocation(String location, org.apache.hadoop.mapreduce.Job job) throws IOException

Description copied from class: LoadFunc. Communicate to the loader the location of the object(s) being loaded. The location string passed here is the return value of LoadFunc.relativeToAbsolutePath(String, Path). Implementations should use this method to communicate the location (and any other information) to the underlying InputFormat through the Job object. This method is called in the frontend and backend, possibly multiple times; implementations should ensure the repeated calls cause no inconsistent side effects.

Specified by: setLocation in class LoadFunc
Parameters:
location - Location as returned by LoadFunc.relativeToAbsolutePath(String, Path)
job - the Job object; can be used to store or retrieve earlier stored information from the UDFContext
Throws: IOException - if the location is not valid
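A minimal sketch of the usual file-based implementation, assuming the input is a path understood by Hadoop's FileInputFormat:

```java
import java.io.IOException;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;

@Override
public void setLocation(String location, Job job) throws IOException {
    // Hand the resolved location to the underlying InputFormat via the Job.
    // setInputPaths overwrites rather than appends, so the repeated
    // front-end/back-end calls leave no inconsistent state behind.
    FileInputFormat.setInputPaths(job, location);
}
```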
public void setUDFContextSignature(String signature)

Description copied from class: LoadFunc. This method will be called by Pig both in the front end and back end to pass a unique signature to the LoadFunc. The signature can be used to store into the UDFContext any information which the LoadFunc needs to keep between method invocations in the front end and back end. A typical use case is to store the LoadPushDown.RequiredFieldList passed to LoadPushDown.pushProjection(RequiredFieldList) for use in the back end before returning tuples in LoadFunc.getNext(). This method will be called before other methods in LoadFunc.

Overrides: setUDFContextSignature in class LoadFunc
Parameters: signature - a unique signature to identify this LoadFunc
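A common pattern, sketched here with illustrative field and helper names: keep the signature in a field and use it to key a Properties bag in the UDFContext, which Pig ships from the front end to the back end:

```java
import java.util.Properties;
import org.apache.pig.impl.util.UDFContext;

// Fragment of a LoadFunc subclass; udfSignature is an illustrative name.
private String udfSignature;

@Override
public void setUDFContextSignature(String signature) {
    udfSignature = signature; // called before the other LoadFunc methods
}

private Properties udfProps() {
    // The signature distinguishes this loader instance from any other
    // loader (even of the same class) used in the same script.
    return UDFContext.getUDFContext()
            .getUDFProperties(this.getClass(), new String[] { udfSignature });
}
```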
public ResourceSchema getSchema(String location, org.apache.hadoop.mapreduce.Job job) throws IOException

Description copied from interface: LoadMetadata. Get a schema for the data to be loaded.
Specified by: getSchema in interface LoadMetadata
Parameters:
location - Location as returned by LoadFunc.relativeToAbsolutePath(String, org.apache.hadoop.fs.Path)
job - The Job object; this should be used only to obtain cluster properties through JobContextImpl.getConfiguration() and not to set or query any runtime job information.
Throws: IOException - if an exception occurs while determining the schema
public void prepareToRead(org.apache.hadoop.mapreduce.RecordReader reader, PigSplit split) throws IOException

Description copied from class: LoadFunc. Initializes LoadFunc for reading data.
Specified by: prepareToRead in class LoadFunc
Parameters:
reader - RecordReader to be used by this instance of the LoadFunc
split - The input PigSplit to process
Throws: IOException - if there is an exception during initialization
public Tuple getNext() throws IOException

Description copied from class: LoadFunc. Retrieves the next tuple to be processed.
Specified by: getNext in class LoadFunc
Throws: IOException - if there is an exception while retrieving the next tuple
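prepareToRead and getNext work as a pair: the RecordReader saved in prepareToRead is drained one record per getNext call, with null signalling end of input. A sketch of the canonical pattern, as a fragment of a LoadFunc subclass simplified to one column; a fixed-width loader would slice the line into fields instead:

```java
import java.io.IOException;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigSplit;
import org.apache.pig.data.Tuple;
import org.apache.pig.data.TupleFactory;

private RecordReader reader;

@Override
public void prepareToRead(RecordReader reader, PigSplit split) throws IOException {
    this.reader = reader; // keep the reader for use in getNext()
}

@Override
public Tuple getNext() throws IOException {
    try {
        if (!reader.nextKeyValue()) {
            return null; // end of this split
        }
        Text line = (Text) reader.getCurrentValue();
        Tuple t = TupleFactory.getInstance().newTuple(1);
        t.set(0, line.toString());
        return t;
    } catch (InterruptedException e) {
        throw new IOException("Error reading input", e);
    }
}
```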
public LoadPushDown.RequiredFieldResponse pushProjection(LoadPushDown.RequiredFieldList requiredFieldList) throws FrontendException

Description copied from interface: LoadPushDown. Indicate to the loader fields that will be needed.
Specified by: pushProjection in interface LoadPushDown
Parameters: requiredFieldList - RequiredFieldList indicating which columns will be needed. This structure is read only; the user cannot make changes to it inside pushProjection.
Throws: FrontendException
public List<LoadPushDown.OperatorSet> getFeatures()

Description copied from interface: LoadPushDown. Determine the operators that can be pushed to the loader.
Specified by: getFeatures in interface LoadPushDown
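Together these two methods implement projection pushdown: getFeatures advertises support, and pushProjection receives the columns the script actually uses. A hedged sketch; the requiredFields field is illustrative, and a production loader would persist the list in the UDFContext (see setUDFContextSignature above) because the front-end instance is not the one that runs on the back end:

```java
import java.util.Arrays;
import java.util.List;
import org.apache.pig.LoadPushDown;
import org.apache.pig.impl.logicalLayer.FrontendException;

private LoadPushDown.RequiredFieldList requiredFields;

@Override
public List<LoadPushDown.OperatorSet> getFeatures() {
    return Arrays.asList(LoadPushDown.OperatorSet.PROJECTION);
}

@Override
public LoadPushDown.RequiredFieldResponse pushProjection(
        LoadPushDown.RequiredFieldList requiredFieldList) throws FrontendException {
    requiredFields = requiredFieldList; // read only; never modify its contents
    // true tells Pig the loader will honor the projection itself
    return new LoadPushDown.RequiredFieldResponse(true);
}
```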
public ResourceStatistics getStatistics(String location, org.apache.hadoop.mapreduce.Job job) throws IOException

Description copied from interface: LoadMetadata. Get statistics about the data to be loaded. If the implementing class also extends LoadFunc, then LoadFunc.setLocation(String, org.apache.hadoop.mapreduce.Job) is guaranteed to be called before this method.
Specified by: getStatistics in interface LoadMetadata
Parameters:
location - Location as returned by LoadFunc.relativeToAbsolutePath(String, org.apache.hadoop.fs.Path)
job - The Job object; this should be used only to obtain cluster properties through JobContextImpl.getConfiguration() and not to set or query any runtime job information.
Throws: IOException - if an exception occurs while retrieving statistics
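The LoadMetadata contract lets a loader decline: returning null signals that no statistics are available. A minimal sketch:

```java
import java.io.IOException;
import org.apache.hadoop.mapreduce.Job;
import org.apache.pig.ResourceStatistics;

@Override
public ResourceStatistics getStatistics(String location, Job job) throws IOException {
    // null tells Pig no statistics are available for this input.
    return null;
}
```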
public String[] getPartitionKeys(String location, org.apache.hadoop.mapreduce.Job job) throws IOException

Description copied from interface: LoadMetadata. Find what columns are partition keys for this input.
Specified by: getPartitionKeys in interface LoadMetadata
Parameters:
location - Location as returned by LoadFunc.relativeToAbsolutePath(String, org.apache.hadoop.fs.Path)
job - The Job object; this should be used only to obtain cluster properties through JobContextImpl.getConfiguration() and not to set or query any runtime job information.
Throws: IOException - if an exception occurs while retrieving partition keys
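For inputs with no partition support, returning null here short-circuits partition handling; as the setPartitionFilter notes below state, Pig will then never call setPartitionFilter. A minimal sketch:

```java
import java.io.IOException;
import org.apache.hadoop.mapreduce.Job;

@Override
public String[] getPartitionKeys(String location, Job job) throws IOException {
    // null signals that this input has no partition keys.
    return null;
}
```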
public void setPartitionFilter(Expression partitionFilter) throws IOException

Description copied from interface: LoadMetadata. Set the filter for partitioning. If the implementation returns null in LoadMetadata.getPartitionKeys(String, Job), then this method is not called by the Pig runtime. This method is also not called by the Pig runtime if there are no partition filter conditions.
Specified by: setPartitionFilter in interface LoadMetadata
Parameters: partitionFilter - the Expression that describes the filter for partitioning
Throws: IOException - if the filter is not compatible with the storage mechanism or contains non-partition fields

Copyright © 2007-2017 The Apache Software Foundation