Uses of Interface
org.apache.hadoop.mapreduce.JobContext

Packages that use JobContext
org.apache.hadoop.mapred A software framework for easily writing applications that process vast amounts of data (multi-terabyte data-sets) in parallel on large clusters (thousands of nodes) built of commodity hardware in a reliable, fault-tolerant manner. 
org.apache.hadoop.mapreduce   
org.apache.hadoop.mapreduce.lib.db   
org.apache.hadoop.mapreduce.lib.input   
org.apache.hadoop.mapreduce.lib.map   
org.apache.hadoop.mapreduce.lib.output   
org.apache.hadoop.mapreduce.lib.partition   
org.apache.hadoop.mapreduce.lib.reduce   
org.apache.hadoop.mapreduce.task   
 

Uses of JobContext in org.apache.hadoop.mapred
 

Subinterfaces of JobContext in org.apache.hadoop.mapred
 interface JobContext
           
 

Methods in org.apache.hadoop.mapred with parameters of type JobContext
 void OutputCommitter.abortJob(JobContext context, JobStatus.State runState)
          This method implements the new interface by calling the old method.
 void OutputCommitter.cleanupJob(JobContext context)
          Deprecated. 
 void OutputCommitter.commitJob(JobContext context)
          This method implements the new interface by calling the old method.
static int LocalJobRunner.getLocalMaxRunningMaps(JobContext job)
          Get the max number of map tasks to run concurrently in the LocalJobRunner.
static void LocalJobRunner.setLocalMaxRunningMaps(JobContext job, int maxMaps)
          Set the max number of map tasks to run concurrently in the LocalJobRunner.
 void OutputCommitter.setupJob(JobContext jobContext)
          This method implements the new interface by calling the old method.
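
Several of the org.apache.hadoop.mapred methods above note that they "implement the new interface by calling the old method". The following is a minimal plain-Java sketch of that bridge pattern, with no Hadoop dependency; the class and method names are simplified stand-ins for the real OutputCommitter types:

```java
/**
 * Sketch of the new-to-old bridge used by org.apache.hadoop.mapred's
 * OutputCommitter: the new-interface entry point simply delegates to the
 * older, deprecated hook so legacy committers keep working unchanged.
 * All names here are illustrative stand-ins, not the real Hadoop classes.
 */
public class CommitterBridgeSketch {

    /** Stand-in for the old, deprecated hook (cf. cleanupJob(JobContext)). */
    static String cleanupJob(String jobId) {
        return "cleaned:" + jobId;
    }

    /** Stand-in for the new entry point: implemented by calling the old method. */
    static String commitJob(String jobId) {
        return cleanupJob(jobId);
    }

    public static void main(String[] args) {
        // The new call path lands in the legacy implementation.
        System.out.println(commitJob("job_001"));  // prints "cleaned:job_001"
    }
}
```

This keeps existing committers that only override the deprecated method working when the framework invokes the newer API.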
 

Uses of JobContext in org.apache.hadoop.mapreduce
 

Subinterfaces of JobContext in org.apache.hadoop.mapreduce
 interface MapContext<KEYIN,VALUEIN,KEYOUT,VALUEOUT>
          The context that is given to the Mapper.
 interface ReduceContext<KEYIN,VALUEIN,KEYOUT,VALUEOUT>
          The context passed to the Reducer.
 interface TaskAttemptContext
          The context for task attempts.
 interface TaskInputOutputContext<KEYIN,VALUEIN,KEYOUT,VALUEOUT>
          A context object that allows input and output from the task.
 

Classes in org.apache.hadoop.mapreduce that implement JobContext
 class Job
          The job submitter's view of the Job.
 class Mapper.Context
          The Context passed on to the Mapper implementations.
 class Reducer.Context
          The Context passed on to the Reducer implementations.
 

Methods in org.apache.hadoop.mapreduce that return JobContext
static JobContext ContextFactory.cloneContext(JobContext original, org.apache.hadoop.conf.Configuration conf)
          Clone a JobContext or TaskAttemptContext with a new configuration.
 

Methods in org.apache.hadoop.mapreduce with parameters of type JobContext
 void OutputCommitter.abortJob(JobContext jobContext, JobStatus.State state)
          For aborting an unsuccessful job's output.
abstract  void OutputFormat.checkOutputSpecs(JobContext context)
          Check for validity of the output-specification for the job.
 void OutputCommitter.cleanupJob(JobContext context)
          Deprecated. Use OutputCommitter.commitJob(JobContext) or OutputCommitter.abortJob(JobContext, JobStatus.State) instead.
static JobContext ContextFactory.cloneContext(JobContext original, org.apache.hadoop.conf.Configuration conf)
          Clone a JobContext or TaskAttemptContext with a new configuration.
 void OutputCommitter.commitJob(JobContext jobContext)
          For cleaning up the job's output after job completion.
abstract  java.util.List<InputSplit> InputFormat.getSplits(JobContext context)
          Logically split the set of input files for the job.
abstract  void OutputCommitter.setupJob(JobContext jobContext)
          For the framework to set up the job output during initialization.
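
Taken together, the job-level OutputCommitter methods above define a lifecycle: the framework calls setupJob() during initialization, then commitJob() after a successful run or abortJob() after a failed one. A standalone sketch of that call ordering, using a simplified stand-in for the committer type:

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Illustrative driver (plain Java, no Hadoop dependency) for the job-level
 * OutputCommitter lifecycle: setupJob() first, then exactly one of
 * commitJob() or abortJob() depending on the job outcome. The Committer
 * interface below is a simplified stand-in.
 */
public class LifecycleSketch {

    interface Committer {
        void setupJob();
        void commitJob();
        void abortJob();
    }

    /** Drives a committer the way the framework does; returns the call order. */
    static List<String> runJob(boolean succeeded) {
        List<String> calls = new ArrayList<>();
        Committer c = new Committer() {
            public void setupJob()  { calls.add("setupJob"); }
            public void commitJob() { calls.add("commitJob"); }
            public void abortJob()  { calls.add("abortJob"); }
        };
        c.setupJob();                     // during job initialization
        if (succeeded) c.commitJob();     // after successful completion
        else           c.abortJob();      // after failure / kill
        return calls;
    }

    public static void main(String[] args) {
        System.out.println(runJob(true));   // [setupJob, commitJob]
        System.out.println(runJob(false));  // [setupJob, abortJob]
    }
}
```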
 

Uses of JobContext in org.apache.hadoop.mapreduce.lib.db
 

Methods in org.apache.hadoop.mapreduce.lib.db with parameters of type JobContext
 void DBOutputFormat.checkOutputSpecs(JobContext context)
           
 java.util.List<InputSplit> DBInputFormat.getSplits(JobContext job)
          Logically split the set of input files for the job.
 java.util.List<InputSplit> DataDrivenDBInputFormat.getSplits(JobContext job)
          Logically split the set of input files for the job.
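
DBInputFormat.getSplits divides the table's row count evenly across the requested number of splits, with the final split absorbing any remainder. A self-contained sketch of that arithmetic (the method name and {start, length} representation are illustrative, not the real InputSplit API):

```java
/**
 * Sketch of the row-range arithmetic behind DBInputFormat.getSplits:
 * totalRows is divided evenly into numSplits chunks, and the last chunk
 * picks up the remainder rows. No Hadoop dependency; names are stand-ins.
 */
public class DbSplitSketch {

    /** Returns {start, length} pairs covering rows [0, totalRows). */
    static long[][] splitRows(long totalRows, int numSplits) {
        long chunk = totalRows / numSplits;
        long[][] splits = new long[numSplits][];
        for (int i = 0; i < numSplits; i++) {
            long start = i * chunk;
            // last split absorbs the remainder so no rows are dropped
            long length = (i == numSplits - 1) ? totalRows - start : chunk;
            splits[i] = new long[] { start, length };
        }
        return splits;
    }

    public static void main(String[] args) {
        for (long[] s : splitRows(10, 3)) {
            System.out.println(s[0] + "+" + s[1]);  // 0+3, 3+3, 6+4
        }
    }
}
```

DataDrivenDBInputFormat refines this by splitting on value ranges of a chosen column rather than on row offsets, which avoids expensive OFFSET scans.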
 

Uses of JobContext in org.apache.hadoop.mapreduce.lib.input
 

Methods in org.apache.hadoop.mapreduce.lib.input with parameters of type JobContext
static org.apache.hadoop.fs.PathFilter FileInputFormat.getInputPathFilter(JobContext context)
          Get a PathFilter instance of the filter set for the input paths.
static org.apache.hadoop.fs.Path[] FileInputFormat.getInputPaths(JobContext context)
          Get the list of input Paths for the map-reduce job.
static long FileInputFormat.getMaxSplitSize(JobContext context)
          Get the maximum split size.
static long FileInputFormat.getMinSplitSize(JobContext job)
          Get the minimum split size.
static int NLineInputFormat.getNumLinesPerSplit(JobContext job)
          Get the number of lines per split.
 java.util.List<InputSplit> FileInputFormat.getSplits(JobContext job)
          Generate the list of files and make them into FileSplits.
 java.util.List<InputSplit> NLineInputFormat.getSplits(JobContext job)
          Logically splits the set of input files for the job, splits N lines of the input as one split.
 java.util.List<InputSplit> CombineFileInputFormat.getSplits(JobContext job)
           
 java.util.List<InputSplit> DelegatingInputFormat.getSplits(JobContext job)
           
protected  boolean FileInputFormat.isSplitable(JobContext context, org.apache.hadoop.fs.Path filename)
          Is the given filename splitable? Usually true, but if the file is stream compressed, it will not be.
protected  boolean TextInputFormat.isSplitable(JobContext context, org.apache.hadoop.fs.Path file)
           
protected  boolean KeyValueTextInputFormat.isSplitable(JobContext context, org.apache.hadoop.fs.Path file)
           
protected  boolean CombineFileInputFormat.isSplitable(JobContext context, org.apache.hadoop.fs.Path file)
           
protected  java.util.List<org.apache.hadoop.fs.FileStatus> FileInputFormat.listStatus(JobContext job)
          List input directories.
protected  java.util.List<org.apache.hadoop.fs.FileStatus> SequenceFileInputFormat.listStatus(JobContext job)
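
The min/max values returned by getMinSplitSize and getMaxSplitSize above feed FileInputFormat's split-size calculation: the split size is the block size, clamped into [minSize, maxSize]. A standalone sketch of that formula (no Hadoop dependency):

```java
/**
 * The split-size clamp used by FileInputFormat.getSplits:
 * splitSize = max(minSize, min(maxSize, blockSize)).
 * With default min/max settings, one split per HDFS block.
 */
public class SplitSizeSketch {

    static long computeSplitSize(long blockSize, long minSize, long maxSize) {
        return Math.max(minSize, Math.min(maxSize, blockSize));
    }

    public static void main(String[] args) {
        long block = 128L * 1024 * 1024;  // a 128 MB HDFS block
        // Defaults: split size tracks the block size.
        System.out.println(computeSplitSize(block, 1, Long.MAX_VALUE) == block);          // true
        // Raising minSize forces larger (hence fewer) splits.
        System.out.println(computeSplitSize(block, 2 * block, Long.MAX_VALUE) == 2 * block);  // true
        // Lowering maxSize forces smaller (hence more) splits.
        System.out.println(computeSplitSize(block, 1, block / 2) == block / 2);           // true
    }
}
```

isSplitable then gates whether this division is applied at all: formats over stream-compressed files return false and emit one split per file.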
           
 

Uses of JobContext in org.apache.hadoop.mapreduce.lib.map
 

Classes in org.apache.hadoop.mapreduce.lib.map that implement JobContext
 class WrappedMapper.Context
           
 

Methods in org.apache.hadoop.mapreduce.lib.map with parameters of type JobContext
static <K1,V1,K2,V2> java.lang.Class<Mapper<K1,V1,K2,V2>> MultithreadedMapper.getMapperClass(JobContext job)
          Get the application's mapper class.
static int MultithreadedMapper.getNumberOfThreads(JobContext job)
          The number of threads in the thread pool that will run the map function.
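
MultithreadedMapper feeds the records of a single split to a fixed pool of getNumberOfThreads worker threads, each running the application's map function. A plain java.util.concurrent sketch of that threading model (the method name and the toUpperCase "map function" are illustrative stand-ins):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

/**
 * Sketch of MultithreadedMapper's execution model: one split's records are
 * processed by a fixed pool of N threads (N = getNumberOfThreads), each
 * invoking the user's map function. No Hadoop dependency.
 */
public class MultithreadSketch {

    static List<String> mapInParallel(List<String> records, int numThreads)
            throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(numThreads);
        try {
            List<Future<String>> futures = new ArrayList<>();
            for (String r : records) {
                // Stand-in for the application's map function.
                futures.add(pool.submit(() -> r.toUpperCase()));
            }
            List<String> out = new ArrayList<>();
            for (Future<String> f : futures) out.add(f.get());
            return out;
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(mapInParallel(List.of("a", "b", "c"), 2));  // [A, B, C]
    }
}
```

This arrangement only pays off when the map function blocks on I/O or is CPU-heavy per record; the mapper implementation must also be thread-safe.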
 

Uses of JobContext in org.apache.hadoop.mapreduce.lib.output
 

Methods in org.apache.hadoop.mapreduce.lib.output with parameters of type JobContext
 void FileOutputCommitter.abortJob(JobContext context, JobStatus.State state)
          Delete the temporary directory, including all of the work directories.
 void SequenceFileAsBinaryOutputFormat.checkOutputSpecs(JobContext job)
           
 void LazyOutputFormat.checkOutputSpecs(JobContext context)
           
 void NullOutputFormat.checkOutputSpecs(JobContext context)
           
 void FileOutputFormat.checkOutputSpecs(JobContext job)
           
 void FilterOutputFormat.checkOutputSpecs(JobContext context)
           
 void FileOutputCommitter.cleanupJob(JobContext context)
          Deprecated. 
 void FileOutputCommitter.commitJob(JobContext context)
          Delete the temporary directory, including all of the work directories.
static boolean FileOutputFormat.getCompressOutput(JobContext job)
          Is the job output compressed?
static boolean MultipleOutputs.getCountersEnabled(JobContext job)
          Returns whether the counters for the named outputs are enabled.
static org.apache.hadoop.io.SequenceFile.CompressionType SequenceFileOutputFormat.getOutputCompressionType(JobContext job)
          Get the SequenceFile.CompressionType for the output SequenceFile.
static java.lang.Class<? extends org.apache.hadoop.io.compress.CompressionCodec> FileOutputFormat.getOutputCompressorClass(JobContext job, java.lang.Class<? extends org.apache.hadoop.io.compress.CompressionCodec> defaultValue)
          Get the CompressionCodec for compressing the job outputs.
protected static java.lang.String FileOutputFormat.getOutputName(JobContext job)
          Get the base output name for the output file.
static org.apache.hadoop.fs.Path FileOutputFormat.getOutputPath(JobContext job)
          Get the Path to the output directory for the map-reduce job.
static java.lang.Class<? extends org.apache.hadoop.io.WritableComparable> SequenceFileAsBinaryOutputFormat.getSequenceFileOutputKeyClass(JobContext job)
          Get the key class for the SequenceFile.
static java.lang.Class<? extends org.apache.hadoop.io.Writable> SequenceFileAsBinaryOutputFormat.getSequenceFileOutputValueClass(JobContext job)
          Get the value class for the SequenceFile.
protected static void FileOutputFormat.setOutputName(JobContext job, java.lang.String name)
          Set the base output name for the output file to be created.
 void FileOutputCommitter.setupJob(JobContext context)
          Create the temporary directory that is the root of all of the task work directories.
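
The FileOutputCommitter entries above describe a temporary-directory protocol: setupJob creates a work root under the output path, commitJob promotes completed files and removes the work root, and abortJob deletes it outright. A local-filesystem sketch of that flow (directory name and promotion step are simplified stand-ins for the real HDFS logic):

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;

/**
 * Local-filesystem sketch of FileOutputCommitter's job-level behaviour:
 * setupJob creates a temporary work root, commitJob promotes files out of
 * it and deletes it, abortJob would simply delete it. Simplified stand-in,
 * not the real Hadoop implementation.
 */
public class FileCommitSketch {

    /** Analogous to the "_temporary" work root the real committer creates. */
    static Path setupJob(Path outputDir) throws IOException {
        return Files.createDirectories(outputDir.resolve("_temporary"));
    }

    /** Promote each completed file into the output dir, then drop the temp dir. */
    static void commitJob(Path outputDir, Path tmp) throws IOException {
        try (DirectoryStream<Path> files = Files.newDirectoryStream(tmp)) {
            for (Path f : files) {
                Files.move(f, outputDir.resolve(f.getFileName()));
            }
        }
        Files.delete(tmp);
    }

    public static void main(String[] args) throws IOException {
        Path out = Files.createTempDirectory("job-output");
        Path tmp = setupJob(out);
        Files.writeString(tmp.resolve("part-r-00000"), "result\n");
        commitJob(out, tmp);
        System.out.println(Files.exists(out.resolve("part-r-00000")));  // true
        System.out.println(Files.exists(tmp));                          // false
    }
}
```

Writing into a temporary directory and renaming on commit is what makes task retries and job aborts safe: incomplete output never appears under the final path.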
 

Uses of JobContext in org.apache.hadoop.mapreduce.lib.partition
 

Methods in org.apache.hadoop.mapreduce.lib.partition with parameters of type JobContext
static java.lang.String KeyFieldBasedComparator.getKeyFieldComparatorOption(JobContext job)
          Get the KeyFieldBasedComparator options.
 java.lang.String KeyFieldBasedPartitioner.getKeyFieldPartitionerOption(JobContext job)
          Get the KeyFieldBasedPartitioner options.
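
KeyFieldBasedPartitioner hashes only the selected key fields, so records that agree on those fields land in the same partition. A self-contained sketch of that idea; the real "-k" field-selection syntax is simplified here to a single field index, and the non-negative hash-mod scheme matches the convention of Hadoop's default hash partitioning:

```java
/**
 * Sketch of field-based partitioning in the spirit of
 * KeyFieldBasedPartitioner: only the chosen field of the key contributes to
 * the hash. Field selection is simplified to an index; no Hadoop dependency.
 */
public class FieldPartitionSketch {

    /** Partition on the fieldIdx-th tab-separated field of the key. */
    static int getPartition(String key, int fieldIdx, int numPartitions) {
        String field = key.split("\t")[fieldIdx];
        // Mask to keep the hash non-negative before taking the modulus.
        return (field.hashCode() & Integer.MAX_VALUE) % numPartitions;
    }

    public static void main(String[] args) {
        int p1 = getPartition("usr1\t2009-01-01", 0, 4);
        int p2 = getPartition("usr1\t2009-06-30", 0, 4);
        // Same first field, so both records reach the same reducer.
        System.out.println(p1 == p2);  // true
    }
}
```

Pairing this with KeyFieldBasedComparator gives the classic secondary-sort setup: partition on the primary field, sort within the partition on the remaining fields.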
 

Uses of JobContext in org.apache.hadoop.mapreduce.lib.reduce
 

Classes in org.apache.hadoop.mapreduce.lib.reduce that implement JobContext
 class WrappedReducer.Context
           
 

Uses of JobContext in org.apache.hadoop.mapreduce.task
 

Classes in org.apache.hadoop.mapreduce.task that implement JobContext
 class JobContextImpl
          A read-only view of the job that is provided to the tasks while they are running.
 class MapContextImpl<KEYIN,VALUEIN,KEYOUT,VALUEOUT>
          The context that is given to the Mapper.
 class ReduceContextImpl<KEYIN,VALUEIN,KEYOUT,VALUEOUT>
          The context passed to the Reducer.
 class TaskAttemptContextImpl
          The context for task attempts.
 class TaskInputOutputContextImpl<KEYIN,VALUEIN,KEYOUT,VALUEOUT>
          A context object that allows input and output from the task.
 



Copyright © 2009 The Apache Software Foundation