[Hadoop Code Notes] The Map Output Stage of the MapReduce Shuffle

I. Overview

Shuffle is a core phase of MapReduce, so instead of folding it into the earlier posts on job submission, it gets its own, more detailed write-up here, following the official flow diagram.

This post tries to show, through code analysis, how the map side saves its output so that reducers can later come and fetch it. Whatever logic runs inside the map method of a map task, the output ultimately has to be written to disk. If the job has no reduce phase, output goes directly to HDFS; if there is a reduce phase, each map's output is first buffered in memory before being written to disk. Every map task owns a circular in-memory buffer (100 MB by default) that holds its output. Whenever the buffer approaches capacity, a dedicated thread spills its contents to disk as a spill file. When the map task finishes, all spill files it produced are merged into a single partitioned and sorted output file, which then waits for the reduce tasks to pull it.
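
Before diving into the real code, here is a minimal, self-contained sketch (not Hadoop code; all names are illustrative) of the buffer-and-spill idea described above: records accumulate in a fixed buffer, a soft limit triggers a spill, and a final merge would combine the spill files.

import java.util.ArrayList;
import java.util.List;

class SpillSketch {
  private final byte[] buffer = new byte[1 << 20];            // 1 MB here; Hadoop defaults to 100 MB (io.sort.mb)
  private final int softLimit = (int) (buffer.length * 0.8);  // like io.sort.spill.percent
  private int used = 0;
  private final List<String> spillFiles = new ArrayList<>();

  void collect(byte[] record) {
    if (used + record.length > softLimit) {
      spill();                       // in Hadoop this runs on a separate SpillThread
    }
    System.arraycopy(record, 0, buffer, used, record.length);
    used += record.length;
  }

  private void spill() {
    // Hadoop would sort by partition/key here and write an IFile per partition.
    spillFiles.add("spill" + spillFiles.size() + ".out");
    used = 0;
  }

  void close() {
    if (used > 0) spill();           // final flush of whatever remains
    // mergeParts(): merge all spillFiles into one sorted, partitioned file.out
  }
}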

II. Flow Description

  1. When the child process reaches runNewMapper, it sets output to a NewOutputCollector, which is responsible for the map output.
  2. Whatever logic the map method runs, at the end it normally calls TaskInputOutputContext's write method, which in turn calls the write method of the configured output, i.e. NewOutputCollector.
  3. NewOutputCollector is really just a wrapper around MapOutputBuffer; its write method delegates to MapOutputBuffer's collect method.
  4. MapOutputBuffer's collect method serializes the key and value and stores them in a circular buffer; when the buffer fills up, startSpill is called to set the bookkeeping state and signal a condition variable, so that the dedicated SpillThread can process the buffered data.
  5. SpillThread's run method calls sortAndSpill, which sorts the buffered data and writes it out as a spill file.
  6. When the map has finished producing output, output's close method is called.
  7. close calls flush, which handles whatever remains in the buffer and finally calls mergeParts to merge the spill files produced along the way into one file.

Code flow of the map output stage of the MapReduce shuffle

III. Code Details

1. MapTask's runNewMapper method

Note the piece of code below. When the job has only maps and no reduces, the org.apache.hadoop.mapreduce.RecordWriter object output is the writer defined by the OutputFormat, i.e. it writes directly to the job output. If there are reduces, output is a NewOutputCollector.

if (job.getNumReduceTasks() == 0) {
  output = outputFormat.getRecordWriter(taskContext);
} else {
  output = new NewOutputCollector(taskContext, job, umbilical, reporter);
}
mapperContext = contextConstructor.newInstance(mapper, job, getTaskID(),
    input, output, committer, reporter, split);
input.initialize(split, mapperContext);
mapper.run(mapperContext);

Like other RecordWriters, NewOutputCollector extends the RecordWriter abstract class. Apart from a close method for releasing resources, the main method this abstract class defines is void write(K key, V value), i.e. write one key/value pair.
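
For comparison, a minimal custom RecordWriter can look like the sketch below. This is illustrative only; the class name, field, and tab-separated format are assumptions, not taken from Hadoop's built-in writers.

import java.io.DataOutputStream;
import java.io.IOException;
import org.apache.hadoop.mapreduce.RecordWriter;
import org.apache.hadoop.mapreduce.TaskAttemptContext;

// A bare-bones RecordWriter: write() emits one key/value pair, close() releases resources.
class TextPairWriter<K, V> extends RecordWriter<K, V> {
  private final DataOutputStream out;

  TextPairWriter(DataOutputStream out) {
    this.out = out;
  }

  @Override
  public void write(K key, V value) throws IOException {
    out.writeBytes(key + "\t" + value + "\n");
  }

  @Override
  public void close(TaskAttemptContext context) throws IOException {
    out.close();
  }
}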

2. Mapper's run method, which invokes map for every input key/value pair.

public void run(Context context) throws IOException, InterruptedException {
  setup(context);
  while (context.nextKeyValue()) {
    map(context.getCurrentKey(), context.getCurrentValue(), context);
  }
  cleanup(context);
}

3. Mapper's map method; the default implementation simply writes the key and value straight through

protected void map(KEYIN key, VALUEIN value,
                   Context context) throws IOException, InterruptedException {
  context.write((KEYOUT) key, (VALUEOUT) value);
}

In practice the map method does whatever processing we need. In the famous wordcount example, each line is split into words and every word counts as one (the value is always one = new IntWritable(1)), but in the end the result is still written out, i.e. context.write(key, value) is called.

public void map(Object key, Text value, Context context
                ) throws IOException, InterruptedException {
  StringTokenizer itr = new StringTokenizer(value.toString());
  while (itr.hasMoreTokens()) {
    word.set(itr.nextToken());
    context.write(word, one);
  }
}

4. TaskInputOutputContext's write method.

It calls the write method of the RecordWriter held by the context, which here means NewOutputCollector's write method.

public void write(KEYOUT key, VALUEOUT value
                  ) throws IOException, InterruptedException {
  output.write(key, value);
}

5. NewOutputCollector's write method

public void write(K key, V value) throws IOException, InterruptedException {
  collector.collect(key, value,
                    partitioner.getPartition(key, value, partitions));
}

The method name makes it clear that the actual writing is done by the collector object of type MapOutputCollector<K,V>. Its initialization can be seen in NewOutputCollector's constructor.

collector = new MapOutputBuffer<K,V>(umbilical, job, reporter);
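
The partition index passed into collect comes from the job's Partitioner. As a hedged illustration (this mirrors the behavior of the default hash-based partitioner rather than quoting it), a Partitioner maps each key/value pair to one of numPartitions reduce buckets:

import org.apache.hadoop.mapreduce.Partitioner;

// Maps each record to a reduce bucket; mirrors the default hash-based behavior.
class HashLikePartitioner<K, V> extends Partitioner<K, V> {
  @Override
  public int getPartition(K key, V value, int numPartitions) {
    // mask off the sign bit so the modulo result is non-negative
    return (key.hashCode() & Integer.MAX_VALUE) % numPartitions;
  }
}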

6. MapOutputBuffer's constructor

Before looking at MapOutputBuffer's collect method, let's look at its constructor and see what it initializes.

public MapOutputBuffer(TaskUmbilicalProtocol umbilical, JobConf job,
                       TaskReporter reporter
                       ) throws IOException, ClassNotFoundException {
  this.job = job;
  this.reporter = reporter;
  localFs = FileSystem.getLocal(job);
  // 1) the number of map-side partitions equals the number of reduces configured for the job
  partitions = job.getNumReduceTasks();
  rfs = ((LocalFileSystem)localFs).getRaw();
  indexCacheList = new ArrayList<SpillRecord>();

  // 2) important tuning parameters
  final float spillper = job.getFloat("io.sort.spill.percent",(float)0.8);
  final float recper = job.getFloat("io.sort.record.percent",(float)0.05);
  final int sortmb = job.getInt("io.sort.mb", 100);
  if (spillper > (float)1.0 || spillper < (float)0.0) {
    throw new IOException("Invalid \"io.sort.spill.percent\": " + spillper);
  }
  if (recper > (float)1.0 || recper < (float)0.01) {
    throw new IOException("Invalid \"io.sort.record.percent\": " + recper);
  }
  if ((sortmb & 0x7FF) != sortmb) {
    throw new IOException("Invalid \"io.sort.mb\": " + sortmb);
  }
  // 3) the sorter, used to sort the map output within each partition, in memory
  sorter = ReflectionUtils.newInstance(
        job.getClass("map.sort.class", QuickSort.class, IndexedSorter.class), job);
  LOG.info("io.sort.mb = " + sortmb);
  // buffers and accounting
  // shift sortmb (megabytes) left by 20 bits to convert it to bytes
  int maxMemUsage = sortmb << 20;
  int recordCapacity = (int)(maxMemUsage * recper);
  recordCapacity -= recordCapacity % RECSIZE;
  // the output buffer
  kvbuffer = new byte[maxMemUsage - recordCapacity];
  bufvoid = kvbuffer.length;
  recordCapacity /= RECSIZE;
  kvoffsets = new int[recordCapacity];
  kvindices = new int[recordCapacity * ACCTSIZE];
  softBufferLimit = (int)(kvbuffer.length * spillper);
  softRecordLimit = (int)(kvoffsets.length * spillper);
  LOG.info("data buffer = " + softBufferLimit + "/" + kvbuffer.length);
  LOG.info("record buffer = " + softRecordLimit + "/" + kvoffsets.length);
  // k/v serialization
  comparator = job.getOutputKeyComparator();
  keyClass = (Class<K>)job.getMapOutputKeyClass();
  valClass = (Class<V>)job.getMapOutputValueClass();
  serializationFactory = new SerializationFactory(job);
  keySerializer = serializationFactory.getSerializer(keyClass);
  keySerializer.open(bb);
  valSerializer = serializationFactory.getSerializer(valClass);
  valSerializer.open(bb);
  // counters
  mapOutputByteCounter = reporter.getCounter(MAP_OUTPUT_BYTES);
  mapOutputRecordCounter = reporter.getCounter(MAP_OUTPUT_RECORDS);
  Counters.Counter combineInputCounter =
    reporter.getCounter(COMBINE_INPUT_RECORDS);
  combineOutputCounter = reporter.getCounter(COMBINE_OUTPUT_RECORDS);
  // 4) compression
  if (job.getCompressMapOutput()) {
    Class<? extends CompressionCodec> codecClass =
      job.getMapOutputCompressorClass(DefaultCodec.class);
    codec = ReflectionUtils.newInstance(codecClass, job);
  }
  // 5) combiner: a NewCombinerRunner that invokes the job's reducer to combine
  //    the map output on the map side
  combinerRunner = CombinerRunner.create(job, getTaskID(),
                                         combineInputCounter,
                                         reporter, null);
  if (combinerRunner != null) {
    combineCollector = new CombineOutputCollector<K,V>(combineOutputCounter);
  } else {
    combineCollector = null;
  }
  // 6) start the SpillThread
  minSpillsForCombine = job.getInt("min.num.spills.for.combine", 3);
  spillThread.setDaemon(true);
  spillThread.setName("SpillThread");
  spillLock.lock();
  try {
    spillThread.start();
    while (!spillThreadRunning) {
      spillDone.await();
    }
  } catch (InterruptedException e) {
    throw (IOException)new IOException("Spill thread failed to initialize"
        ).initCause(sortSpillException);
  } finally {
    spillLock.unlock();
  }
  if (sortSpillException != null) {
    throw (IOException)new IOException("Spill thread failed to initialize"
        ).initCause(sortSpillException);
  }
}
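
These thresholds all come from the job configuration, so they can be tuned per job. A hedged driver-side sketch follows; the values shown are arbitrary examples, not recommendations, and the keys are exactly the ones read in the constructor above:

import org.apache.hadoop.mapred.JobConf;

public class SortTuning {
  public static void main(String[] args) {
    JobConf conf = new JobConf();
    conf.setInt("io.sort.mb", 200);                 // sort buffer size in MB (default 100)
    conf.setFloat("io.sort.spill.percent", 0.8f);   // soft limit that triggers a spill
    conf.setFloat("io.sort.record.percent", 0.05f); // share of the buffer reserved for record accounting
    conf.setInt("min.num.spills.for.combine", 3);   // combine during merge only with at least this many spills
    conf.setInt("io.sort.factor", 100);             // number of spill segments merged at once
  }
}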

7. MapOutputBuffer's collect method.

The partition parameter is the partition index the partitioner computed from the key. collect writes the key and value into the buffer; when the buffer meets the spill condition, startSpill sets the bookkeeping variables and notifies the SpillThread via spillReady.signal(), then waits for the spill to finish via spillDone.await(). The buffer's purpose is to collect map results in batches and reduce the impact of disk I/O. The key/value pair and its partition are all recorded into the buffer, with key and value serialized into byte arrays before being written. kvindices holds, for each record, its partition, the offset where the key starts in the buffer, and the offset where the value starts; through kvindices a record can be located in the buffer. On the byte-buffer side, the counterparts of kvstart, kvend and kvindex are bufstart, bufend and bufmark. Two more variables matter here: bufvoid marks the end of the buffer space actually in use, and bufmark marks the end of the last record. bufmark is needed because serialized keys and values are variable-length.

Key/value pairs serialized into the buffer

public synchronized void collect(K key, V value, int partition
                                 ) throws IOException {
  reporter.progress();
  if (key.getClass() != keyClass) {
    throw new IOException("Type mismatch in key from map: expected "
                          + keyClass.getName() + ", recieved "
                          + key.getClass().getName());
  }
  if (value.getClass() != valClass) {
    throw new IOException("Type mismatch in value from map: expected "
                          + valClass.getName() + ", recieved "
                          + value.getClass().getName());
  }
  // the modulo over kvoffsets.length tells us this is a circular buffer
  final int kvnext = (kvindex + 1) % kvoffsets.length;
  // enter the critical section
  spillLock.lock();
  try {
    boolean kvfull;
    do {
      if (sortSpillException != null) {
        throw (IOException)new IOException("Spill failed"
            ).initCause(sortSpillException);
      }
      // sufficient acct space
      kvfull = kvnext == kvstart;
      final boolean kvsoftlimit = ((kvnext > kvend)
          ? kvnext - kvend > softRecordLimit
          : kvend - kvnext <= kvoffsets.length - softRecordLimit);
      if (kvstart == kvend && kvsoftlimit) {
        LOG.info("Spilling map output: record full = " + kvsoftlimit);
        // startSpill sets the bookkeeping variables and signals spillReady to
        // wake the SpillThread; we then wait for the spill to finish
        startSpill();
      }
      if (kvfull) {
        try {
          while (kvstart != kvend) {
            // kvstart != kvend means a spill is in progress; wait on spillDone
            reporter.progress();
            spillDone.await();
          }
        } catch (InterruptedException e) {
          throw (IOException)new IOException(
              "Collector interrupted while waiting for the writer"
              ).initCause(e);
        }
      }
    } while (kvfull);
  } finally {
    spillLock.unlock();
  }

  try {
    // Serialize the key first, then the value. The temporaries keystart,
    // valstart and valend record where the key starts, where the value starts
    // and where the value ends. During serialization, writes into the buffer
    // ultimately go through Buffer.write.
    // serialize key bytes into buffer
    int keystart = bufindex;
    keySerializer.serialize(key);
    if (bufindex < keystart) {
      // bufindex < keystart after serializing the key means the key wrapped
      // around the end of the buffer. The spill has to sort <key,value> pairs,
      // and the RawComparator must be given a contiguous byte range, so
      // BlockingBuffer.reset() makes the key contiguous again: it sets bufvoid
      // to bufmark and copies the bytes between bufmark and bufvoid to the
      // start of the buffer, so the serialized key ends up contiguous at the
      // very beginning of the buffer.
      bb.reset();
      keystart = 0;
    }
    // serialize value bytes into buffer
    final int valstart = bufindex;
    valSerializer.serialize(value);
    int valend = bb.markRecord();

    if (partition < 0 || partition >= partitions) {
      throw new IOException("Illegal partition for " + key + " (" +
          partition + ")");
    }

    mapOutputRecordCounter.increment(1);
    mapOutputByteCounter.increment(valend >= keystart
        ? valend - keystart
        : (bufvoid - keystart) + valend);

    // update accounting info
    int ind = kvindex * ACCTSIZE;
    kvoffsets[kvindex] = ind;
    kvindices[ind + PARTITION] = partition;
    kvindices[ind + KEYSTART] = keystart;
    kvindices[ind + VALSTART] = valstart;
    kvindex = kvnext;
  } catch (MapBufferTooSmallException e) {
    LOG.info("Record too large for in-memory buffer: " + e.getMessage());
    // the serialized record is too large to fit in the buffer at once
    spillSingleRecord(key, value, partition);
    mapOutputRecordCounter.increment(1);
    return;
  }
}
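
To make the wrap-around arithmetic concrete, here is a small standalone example (hypothetical sizes, not Hadoop code) of the kvnext computation and the byte-counter expression used above:

public class WrapArithmetic {
  public static void main(String[] args) {
    // Hypothetical sizes: 10 accounting slots, a 100-byte data buffer.
    int kvoffsetsLength = 10;
    int bufvoid = 100;

    // The index array is circular: after slot 9 comes slot 0 again.
    int kvindex = 9;
    int kvnext = (kvindex + 1) % kvoffsetsLength;    // -> 0, wraps to the front

    // A record whose key starts at byte 90 and whose value ends at byte 15
    // has wrapped past the end of the buffer, so its length is counted in
    // two pieces: (bufvoid - keystart) + valend.
    int keystart = 90, valend = 15;
    int bytes = valend >= keystart
        ? valend - keystart
        : (bufvoid - keystart) + valend;             // (100 - 90) + 15 = 25
    System.out.println("kvnext=" + kvnext + " bytes=" + bytes);
  }
}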

8. MapOutputBuffer.BlockingBuffer's reset() method.

If bufindex < keystart after the key has been serialized, BlockingBuffer's reset method is called. The reason: the spill has to sort <key,value> pairs, and the RawComparator must be handed a contiguous byte range. When reset finds that the serialized key is not contiguous, it sets bufvoid to bufmark, shifts the existing content at the start of the buffer back, and copies the bytes that sat between bufmark and bufvoid to the start of the buffer, so the serialized key ends up stored contiguously at the very beginning of the buffer.

The BlockingBuffer.reset method

If the space in front of bufstart cannot hold the entire serialized key, the handling is to set bufindex to 0 and call write on BlockingBuffer's inner out stream. That in effect calls Buffer.write, which may start a spill, and eventually the serialized key is written successfully.

protected synchronized void reset() throws IOException {

  int headbytelen = bufvoid - bufmark;
  bufvoid = bufmark;
  // The serialized key is not contiguous: set bufvoid to bufmark, shift the
  // beginning of the buffer back, and copy the bytes between bufmark and
  // bufvoid to the start of the buffer, so the serialized key is stored
  // contiguously at the very beginning of the buffer.

  if (bufindex + headbytelen < bufstart) {
    System.arraycopy(kvbuffer, 0, kvbuffer, headbytelen, bufindex);
    System.arraycopy(kvbuffer, bufvoid, kvbuffer, 0, headbytelen);
    bufindex += headbytelen;
  } else {
    // If the space in front of bufstart cannot hold the whole serialized key,
    // set bufindex to 0 and write directly through the inner out stream.
    byte[] keytmp = new byte[bufindex];
    System.arraycopy(kvbuffer, 0, keytmp, 0, bufindex);
    bufindex = 0;
    out.write(kvbuffer, bufmark, headbytelen);
    out.write(keytmp);
  }
}

9. MapOutputBuffer.Buffer's write method.

Called while the key and value are being serialized, to write them into the buffer. If the spill thread is busy writing buffered data to a spill file, this method blocks.

public synchronized void write(byte b[], int off, int len)
    throws IOException {
  boolean buffull = false;
  boolean wrap = false;
  spillLock.lock();
  try {
    do { // loop until there is enough space to write the data
      if (sortSpillException != null) {
        throw (IOException)new IOException("Spill failed"
            ).initCause(sortSpillException);
      }

      // sufficient buffer space?
      if (bufstart <= bufend && bufend <= bufindex) {
        buffull = bufindex + len > bufvoid;
        wrap = (bufvoid - bufindex) + bufstart > len;
      } else {
        // bufindex <= bufstart <= bufend
        // bufend <= bufindex <= bufstart
        wrap = false;
        buffull = bufindex + len > bufstart;
      }

      if (kvstart == kvend) {
        // spill thread not running
        if (kvend != kvindex) {
          // there are records in the accounting array (kvend != kvindex), so
          // start a spill if needed (buffer space is short, or the record
          // count has reached the spill threshold)
          final boolean bufsoftlimit = (bufindex > bufend)
            ? bufindex - bufend > softBufferLimit
            : bufend - bufindex < bufvoid - softBufferLimit;
          if (bufsoftlimit || (buffull && !wrap)) {
            LOG.info("Spilling map output: buffer full= " + bufsoftlimit);
            startSpill();
          }
        } else if (buffull && !wrap) {
          // Space is short (buffull && !wrap) but the buffer holds no records:
          // this single record is too large for the in-memory buffer, so throw
          // MapBufferTooSmallException and write it straight to a file,
          // bypassing the buffer.
          final int size = ((bufend <= bufindex)
            ? bufindex - bufend
            : (bufvoid - bufend) + bufindex) + len;
          bufstart = bufend = bufindex = bufmark = 0;
          kvstart = kvend = kvindex = 0;
          bufvoid = kvbuffer.length;
          throw new MapBufferTooSmallException(size + " bytes");
        }
      }

      if (buffull && !wrap) {
        try {
          // not enough space but a spill is running: wait on spillDone
          while (kvstart != kvend) {
            reporter.progress();
            spillDone.await();
          }
        } catch (InterruptedException e) {
            throw (IOException)new IOException(
                "Buffer interrupted while waiting for the writer"
                ).initCause(e);
        }
      }
    } while (buffull && !wrap);
  } finally {
    spillLock.unlock();
  }
  // This is where the data actually goes into the buffer. If buffull, the
  // write cannot be contiguous: fill the rest of the buffer, set bufindex to
  // 0, and continue writing from there. Otherwise just write at bufindex.
  if (buffull) {
    // remaining space up to the end of the buffer
    final int gaplen = bufvoid - bufindex;
    // fill that remaining space
    System.arraycopy(b, off, kvbuffer, bufindex, gaplen);
    // bytes still to write
    len -= gaplen;
    // adjusted source offset
    off += gaplen;
    // move the write pointer back to the start
    bufindex = 0;
  }
  // write from the current position
  System.arraycopy(b, off, kvbuffer, bufindex, len);
  bufindex += len;
}

The buffull and wrap conditions

As the figure shows, consider the buffull and wrap conditions. In the first two cases, case 1 and case 2:

buffull = bufindex + len > bufvoid;
wrap = (bufvoid - bufindex) + bufstart > len;

The buffull condition checks whether the space from the write pointer bufindex to the end of the buffer bufvoid is enough to hold the data being written; wrap checks whether the free space (the white areas in the figure, front and back combined) is larger than the input, and is true if so. In cases 3 and 4:

wrap = false;
buffull = bufindex + len > bufstart;

Here buffull checks whether the space between bufindex and bufstart is sufficient, and wrap is necessarily false. When (buffull && !wrap) holds, the current space cannot accommodate the write in one go.
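
A small standalone check of these conditions with made-up pointer values (illustrative only):

public class BuffullWrapDemo {
  public static void main(String[] args) {
    // Hypothetical layout: a 100-byte buffer with bufstart <= bufend <= bufindex.
    int bufvoid = 100, bufstart = 20, bufindex = 90, len = 30;

    // Case 1/2: writing would run past the end of the buffer,
    boolean buffull = bufindex + len > bufvoid;           // 120 > 100 -> true
    // and the tail space plus the free front space cannot hold it either.
    boolean wrap = (bufvoid - bufindex) + bufstart > len; // 10 + 20 = 30, not > 30 -> false

    // buffull && !wrap: the buffer cannot take this write in one go,
    // so the caller must wait for a spill to free space.
    System.out.println("buffull=" + buffull + " wrap=" + wrap
        + " mustWait=" + (buffull && !wrap));
  }
}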

10. MapOutputBuffer's spillSingleRecord method.

If collect fails to buffer a record, the record is written directly to a spill file. For such a single record, the combiner is not applied even if one is configured. This is the path for a record so large that the in-memory buffer cannot hold it: a MapBufferTooSmallException is thrown and the record goes straight to a file, bypassing the buffer.

private void spillSingleRecord(final K key, final V value,
                               int partition) throws IOException {
  long size = kvbuffer.length + partitions * APPROX_HEADER_LENGTH;
  FSDataOutputStream out = null;
  try {
    // create the spill file
    final SpillRecord spillRec = new SpillRecord(partitions);
    final Path filename = mapOutputFile.getSpillFileForWrite(getTaskID(),
        numSpills, size);
    out = rfs.create(filename);

    IndexRecord rec = new IndexRecord();
    for (int i = 0; i < partitions; ++i) {
      IFile.Writer<K, V> writer = null;
      try {
        long segmentStart = out.getPos();
        writer = new IFile.Writer<K,V>(job, out, keyClass, valClass, codec,
                                       spilledRecordsCounter);

        if (i == partition) {
          final long recordStart = out.getPos();
          writer.append(key, value);
          mapOutputByteCounter.increment(out.getPos() - recordStart);
        }
        writer.close();

        // record the offsets in the index
        rec.startOffset = segmentStart;
        rec.rawLength = writer.getRawLength();
        rec.partLength = writer.getCompressedLength();
        spillRec.putIndex(rec, i);

        writer = null;
      } catch (IOException e) {
        if (null != writer) writer.close();
        throw e;
      }
    }
    // If the index cache is full, write the index to an index file as well;
    // otherwise add this index record to indexCacheList and update
    // totalIndexCacheMemory.
    if (totalIndexCacheMemory >= INDEX_CACHE_MEMORY_LIMIT) {
      // create spill index file
      Path indexFilename = mapOutputFile.getSpillIndexFileForWrite(
          getTaskID(), numSpills,
          partitions * MAP_OUTPUT_INDEX_RECORD_LENGTH);
      spillRec.writeToFile(indexFilename, job);
    } else {
      indexCacheList.add(spillRec);
      totalIndexCacheMemory +=
        spillRec.size() * MAP_OUTPUT_INDEX_RECORD_LENGTH;
    }
    ++numSpills;
  } finally {
    if (out != null) out.close();
  }
}

11. MapOutputBuffer's startSpill method.

It records the spill boundaries (kvend = kvindex, bufend = bufmark) and wakes the thread waiting on spillReady.

private synchronized void startSpill() {
  LOG.info("bufstart = " + bufstart + "; bufend = " + bufmark +
           "; bufvoid = " + bufvoid);
  LOG.info("kvstart = " + kvstart + "; kvend = " + kvindex +
           "; length = " + kvoffsets.length);
  kvend = kvindex;
  bufend = bufmark;
  spillReady.signal();
}

12. SpillThread's run method.

This thread watches the in-memory output buffer and spills its contents to disk once the conditions are met. It is a textbook producer-consumer setup: MapTask's collect method is the producer and the SpillThread is the consumer, synchronized through spillLock (a ReentrantLock) and two condition variables on it (spillDone and spillReady). When kvstart == kvend, there are no records to spill.

public void run() {
  // critical section
  spillLock.lock();
  spillThreadRunning = true;
  try {
    while (true) {
      spillDone.signal();
      // kvstart == kvend means there are no records to spill
      while (kvstart == kvend) {
        spillReady.await();
      }
      try {
        spillLock.unlock();
        // do the actual work
        sortAndSpill();
      } catch (Exception e) {
        sortSpillException = e;
      } catch (Throwable t) {
        sortSpillException = t;
        String logMsg = "Task " + getTaskID() + " failed : "
                        + StringUtils.stringifyException(t);
        reportFatalError(getTaskID(), t, logMsg);
      } finally {
        spillLock.lock();
        if (bufend < bufindex && bufindex < bufstart) {
          bufvoid = kvbuffer.length;
        }
        kvstart = kvend;
        bufstart = bufend;
      }
    }
  } catch (InterruptedException e) {
    Thread.currentThread().interrupt();
  } finally {
    spillLock.unlock();
    spillThreadRunning = false;
  }
}
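
The same synchronization skeleton, reduced to its essentials, looks like this (a minimal sketch with invented names, not the Hadoop code):

import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

// Producer-consumer over a shared "pending work" flag, mirroring how collect()
// (producer) and SpillThread (consumer) coordinate via two condition variables.
class SpillCoordinator {
  private final ReentrantLock lock = new ReentrantLock();
  private final Condition ready = lock.newCondition(); // like spillReady
  private final Condition done = lock.newCondition();  // like spillDone
  private boolean pending = false;                     // like kvstart != kvend

  void produce() throws InterruptedException {         // called by the collector
    lock.lock();
    try {
      while (pending) done.await();                    // wait for the last spill
      pending = true;
      ready.signal();                                  // wake the spill thread
    } finally {
      lock.unlock();
    }
  }

  void consumeLoop() throws InterruptedException {     // the spill thread's loop
    lock.lock();
    try {
      while (true) {
        while (!pending) ready.await();                // wait for work
        // ... sortAndSpill() would run here; Hadoop releases the lock first ...
        pending = false;
        done.signal();                                 // let the producer go on
      }
    } finally {
      lock.unlock();
    }
  }
}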

13. MapOutputBuffer's sortAndSpill() method

SpillThread's run method calls sortAndSpill to write the buffered output into a spill file whose name has the form "/spill" + spillNumber + ".out". The index (kvindices) is kept in spill{n}.out.index and the data in spill{n}.out.

It creates the SpillRecord, the output file and an IndexRecord, then sorts on kvoffsets; after the sort, walking kvoffsets sequentially visits the records in partition order. The sorted array is then processed partition by partition: without a combiner, records are written out directly; otherwise the combiner runs first and the combined result is written. At the end of each iteration the IndexRecord is stored into the SpillRecord.

private void sortAndSpill() throws IOException, ClassNotFoundException,
                                   InterruptedException {
  //approximate the length of the output file to be the length of the
  //buffer + header lengths for the partitions
  long size = (bufend >= bufstart
      ? bufend - bufstart
      : (bufvoid - bufend) + bufstart) +
              partitions * APPROX_HEADER_LENGTH;
  FSDataOutputStream out = null;
  try {
    // create the spill file
    final SpillRecord spillRec = new SpillRecord(partitions);
    final Path filename = mapOutputFile.getSpillFileForWrite(getTaskID(),
        numSpills, size);
    out = rfs.create(filename);

    final int endPosition = (kvend > kvstart)
      ? kvend
      : kvoffsets.length + kvend;
    // Sort in memory using the sorter. Per MapOutputBuffer's compare method,
    // the comparison operates on the serialized bytes; per its swap method,
    // what actually moves is the kvoffsets index array.
    sorter.sort(MapOutputBuffer.this, kvstart, endPosition, reporter);
    int spindex = kvstart;
    IndexRecord rec = new IndexRecord();
    InMemValBytes value = new InMemValBytes();
    for (int i = 0; i < partitions; ++i) {
      IFile.Writer<K, V> writer = null;
      try {
        long segmentStart = out.getPos();
        writer = new Writer<K, V>(job, out, keyClass, valClass, codec,
                                  spilledRecordsCounter);
        if (combinerRunner == null) {
          // no combiner: write the key/value pairs directly
          DataInputBuffer key = new DataInputBuffer();
          while (spindex < endPosition &&
              kvindices[kvoffsets[spindex % kvoffsets.length]
                        + PARTITION] == i) {
            final int kvoff = kvoffsets[spindex % kvoffsets.length];
            getVBytesForOffset(kvoff, value);
            key.reset(kvbuffer, kvindices[kvoff + KEYSTART],
                      (kvindices[kvoff + VALSTART] -
                       kvindices[kvoff + KEYSTART]));
            // write the key/value pair to the spill file
            writer.append(key, value);
            ++spindex;
          }
        } else {
          int spstart = spindex;
          while (spindex < endPosition &&
              kvindices[kvoffsets[spindex % kvoffsets.length]
                        + PARTITION] == i) {
            ++spindex;
          }
          // With a combiner configured, the combined result is written to the
          // IFile through the same writer, shrinking the amount of data
          // spilled to disk.
          if (spstart != spindex) {
            combineCollector.setWriter(writer);
            RawKeyValueIterator kvIter =
              new MRResultIterator(spstart, spindex);
            combinerRunner.combine(kvIter, combineCollector);
          }
        }

        // close the writer
        writer.close();

        // record offsets
        rec.startOffset = segmentStart;
        rec.rawLength = writer.getRawLength();
        rec.partLength = writer.getCompressedLength();
        spillRec.putIndex(rec, i);

        writer = null;
      } finally {
        if (null != writer) writer.close();
      }
    }

    if (totalIndexCacheMemory >= INDEX_CACHE_MEMORY_LIMIT) {
      // write the spill index file, named like "/spill" + spillNumber + ".out.index"
      Path indexFilename = mapOutputFile.getSpillIndexFileForWrite(
          getTaskID(), numSpills,
          partitions * MAP_OUTPUT_INDEX_RECORD_LENGTH);
      spillRec.writeToFile(indexFilename, job);
    } else {
      indexCacheList.add(spillRec);
      totalIndexCacheMemory +=
        spillRec.size() * MAP_OUTPUT_INDEX_RECORD_LENGTH;
    }
    LOG.info("Finished spill " + numSpills);
    ++numSpills;
  } finally {
    if (out != null) out.close();
  }
}

14. MapOutputBuffer's compare and swap methods

MapOutputBuffer implements the IndexedSortable interface. As the name suggests, the sort moves indices rather than data: what gets ordered is effectively kvindices, by rearranging the indices into it that are held in kvoffsets.

The figure shows the effect of the sort before the disk write. kvindices holds, for each record, its (reduce) partition, the offset where the key starts in the buffer and the offset where the value starts, so a record can be located in the buffer through kvindices. kvoffsets is used to sort kvindices by partition when the buffer fills; the sorted result is written to local disk, with the index (kvindices) kept in spill{n}.out.index and the data in spill{n}.out. From MapOutputBuffer's compare method we can see the sort key: first by partition, then by key.

Sorting kvindices through kvoffsets

public int compare(int i, int j) {
  final int ii = kvoffsets[i % kvoffsets.length];
  final int ij = kvoffsets[j % kvoffsets.length];
  // sort by partition first
  if (kvindices[ii + PARTITION] != kvindices[ij + PARTITION]) {
    return kvindices[ii + PARTITION] - kvindices[ij + PARTITION];
  }
  // then by key
  return comparator.compare(kvbuffer,
      kvindices[ii + KEYSTART],
      kvindices[ii + VALSTART] - kvindices[ii + KEYSTART],
      kvbuffer,
      kvindices[ij + KEYSTART],
      kvindices[ij + VALSTART] - kvindices[ij + KEYSTART]);
}

public void swap(int i, int j) {
  i %= kvoffsets.length;
  j %= kvoffsets.length;
  // swapping the entries in kvoffsets is what realizes the sort
  int tmp = kvoffsets[i];
  kvoffsets[i] = kvoffsets[j];
  kvoffsets[j] = tmp;
}
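
The same indirect-sort idea in isolation: sort an array of indices by comparing the data they point to, never moving the data itself (a self-contained sketch, not Hadoop code):

import java.util.Arrays;
import java.util.Comparator;

public class IndirectSortDemo {
  public static void main(String[] args) {
    // The "records": partition number and key, stored once and never moved.
    int[] partition = {1, 0, 1, 0};
    String[] key = {"banana", "cherry", "apple", "date"};

    // The index array plays the role of kvoffsets.
    Integer[] idx = {0, 1, 2, 3};
    Arrays.sort(idx, Comparator
        .<Integer>comparingInt(i -> partition[i])     // partition first
        .thenComparing(i -> key[i]));                 // then key

    for (int i : idx) {
      System.out.println(partition[i] + "\t" + key[i]);
    }
    // prints partition 0 (cherry, date) before partition 1 (apple, banana)
  }
}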

15. MapOutputBuffer's flush() method

Once the Mapper's results have all been collected, the buffer needs some final cleanup: flush is called to merge the spill{n} files into the final output. It first waits for any spill in progress to finish, then checks whether the buffer is empty; if not, it calls sortAndSpill for one last spill, and then terminates the spill thread.

public synchronized void flush() throws IOException, ClassNotFoundException,
                                        InterruptedException {
  LOG.info("Starting flush of map output");
  spillLock.lock();
  try {
    while (kvstart != kvend) {
      reporter.progress();
      spillDone.await();
    }
    if (sortSpillException != null) {
      throw (IOException)new IOException("Spill failed"
          ).initCause(sortSpillException);
    }
    if (kvend != kvindex) {
      kvend = kvindex;
      bufend = bufmark;
      sortAndSpill();
    }
  } catch (InterruptedException e) {
    throw (IOException)new IOException(
        "Buffer interrupted while waiting for the writer"
        ).initCause(e);
  } finally {
    spillLock.unlock();
  }
  assert !spillLock.isHeldByCurrentThread();

  try {
    spillThread.interrupt();
    spillThread.join();
  } catch (InterruptedException e) {
    throw (IOException)new IOException("Spill failed"
        ).initCause(e);
  }
  // release sort buffer before the merge
  kvbuffer = null;
  mergeParts();
}

16. MapTask.MapOutputBuffer's mergeParts() method.

Values for the same key may come from different spill files and need to be combined. Because the merge brings multiple spill files together into one, identical keys can appear across files, and if a Combiner is configured it is applied here as well to merge records with the same key. MapReduce has each map emit exactly one output file, accompanied by an index file that records the offset of each reduce's data.

private void mergeParts() throws IOException, InterruptedException,
                                 ClassNotFoundException {
  // get the approximate size of the final output/index files
  long finalOutFileSize = 0;
  long finalIndexFileSize = 0;
  final Path[] filename = new Path[numSpills];
  final TaskAttemptID mapId = getTaskID();

  for(int i = 0; i < numSpills; i++) {
    filename[i] = mapOutputFile.getSpillFile(mapId, i);
    finalOutFileSize += rfs.getFileStatus(filename[i]).getLen();
  }
  if (numSpills == 1) { // a single spill file is simply renamed to the final output
    rfs.rename(filename[0],
        new Path(filename[0].getParent(), "file.out"));
    if (indexCacheList.size() == 0) {
      rfs.rename(mapOutputFile.getSpillIndexFile(mapId, 0),
          new Path(filename[0].getParent(),"file.out.index"));
    } else {
      indexCacheList.get(0).writeToFile(
            new Path(filename[0].getParent(),"file.out.index"), job);
    }
    return;
  }

  // read in paged indices
  for (int i = indexCacheList.size(); i < numSpills; ++i) {
    Path indexFileName = mapOutputFile.getSpillIndexFile(mapId, i);
    indexCacheList.add(new SpillRecord(indexFileName, job));
  }

  //make correction in the length to include the sequence file header
  //lengths for each partition
  finalOutFileSize += partitions * APPROX_HEADER_LENGTH;
  finalIndexFileSize = partitions * MAP_OUTPUT_INDEX_RECORD_LENGTH;
  Path finalOutputFile = mapOutputFile.getOutputFileForWrite(mapId,
                         finalOutFileSize);
  Path finalIndexFile = mapOutputFile.getOutputIndexFileForWrite(
                        mapId, finalIndexFileSize);

  //The output stream for the final single output file
  FSDataOutputStream finalOut = rfs.create(finalOutputFile, true, 4096);

  if (numSpills == 0) {
    // no spill files at all: create dummy files
    IndexRecord rec = new IndexRecord();
    SpillRecord sr = new SpillRecord(partitions);
    try {
      for (int i = 0; i < partitions; i++) {
        long segmentStart = finalOut.getPos();
        Writer<K, V> writer =
          new Writer<K, V>(job, finalOut, keyClass, valClass, codec, null);
        writer.close();
        rec.startOffset = segmentStart;
        rec.rawLength = writer.getRawLength();
        rec.partLength = writer.getCompressedLength();
        sr.putIndex(rec, i);
      }
      sr.writeToFile(finalIndexFile, job);
    } finally {
      finalOut.close();
    }
    return;
  }
  {
    IndexRecord rec = new IndexRecord();
    final SpillRecord spillRec = new SpillRecord(partitions);
    for (int parts = 0; parts < partitions; parts++) {
      // for each partition, build its list of segments, then merge them
      List<Segment<K,V>> segmentList =
        new ArrayList<Segment<K, V>>(numSpills);
      for(int i = 0; i < numSpills; i++) {
        IndexRecord indexRecord = indexCacheList.get(i).getIndex(parts);

        Segment<K,V> s =
          new Segment<K,V>(job, rfs, filename[i], indexRecord.startOffset,
                           indexRecord.partLength, codec, true);
        segmentList.add(i, s);

        if (LOG.isDebugEnabled()) {
          LOG.debug("MapId=" + mapId + " Reducer=" + parts +
              "Spill =" + i + "(" + indexRecord.startOffset + "," +
              indexRecord.rawLength + ", " + indexRecord.partLength + ")");
        }
      }

      //merge
      @SuppressWarnings("unchecked")
      RawKeyValueIterator kvIter = Merger.merge(job, rfs,
                     keyClass, valClass, codec,
                     segmentList, job.getInt("io.sort.factor", 100),
                     new Path(mapId.toString()),
                     job.getOutputKeyComparator(), reporter,
                     null, spilledRecordsCounter);

      //write merged output to disk
      // run the merge and write its result into the final "/file.out" output
      long segmentStart = finalOut.getPos();
      Writer<K, V> writer =
          new Writer<K, V>(job, finalOut, keyClass, valClass, codec,
                           spilledRecordsCounter);
      if (combinerRunner == null || numSpills < minSpillsForCombine) {
        Merger.writeFile(kvIter, writer, reporter, job);
      } else {
        combineCollector.setWriter(writer);
        combinerRunner.combine(kvIter, combineCollector);
      }

      //close
      writer.close();

      // record offsets
      // write the index into the final "/file.out.index" file
      rec.startOffset = segmentStart;
      rec.rawLength = writer.getRawLength();
      rec.partLength = writer.getCompressedLength();
      spillRec.putIndex(rec, parts);
    }
    spillRec.writeToFile(finalIndexFile, job);
    finalOut.close();
    for(int i = 0; i < numSpills; i++) {
      rfs.delete(filename[i],true);
    }
  }
}

Structure of the index and spill files before and after the merge

From the preceding analysis we know that all partitions live in a single output file, ordered by partition: the mapper's output is segmented by partition. Each partition corresponds to one reducer, so a reducer only needs to fetch its own segment.
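
To illustrate how the index makes that possible, here is a hedged sketch (invented types and field names modeled on IndexRecord; not the actual fetch code in ReduceTask) of reading one reducer's segment out of file.out using its index entry:

import java.io.IOException;
import java.io.RandomAccessFile;

// Hypothetical mirror of an IndexRecord entry: where a partition's segment
// starts in file.out and how long it is on disk.
class SegmentIndex {
  long startOffset;
  long partLength;
}

public class FetchSegmentSketch {
  // Read exactly one reducer's slice of the map output file.
  static byte[] readSegment(String mapOutputPath, SegmentIndex idx)
      throws IOException {
    try (RandomAccessFile f = new RandomAccessFile(mapOutputPath, "r")) {
      byte[] segment = new byte[(int) idx.partLength];
      f.seek(idx.startOffset);   // jump straight to this partition's data
      f.readFully(segment);      // read only partLength bytes
      return segment;
    }
  }
}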

The end.

References:

This post draws on, and supplements, the analysis of the in-memory index structures in http://caibinbupt.iteye.com/blog/401374. Thanks.