Flink - NetworkEnvironment
There is one NetworkEnvironment per TaskManager, not one per task.
Its most important member is the networkBufferPool:
the intermediate results produced by operators (ResultPartition) and the input data consumed through InputGates
both need memory to be buffered in, and it is the networkBufferPool that manages this memory.
/**
 * Network I/O components of each {@link TaskManager} instance. The network environment contains
 * the data structures that keep track of all intermediate results and all data exchanges.
 *
 * When initialized, the NetworkEnvironment will allocate the network buffer pool.
 * All other components (netty, intermediate result managers, ...) are only created once the
 * environment is "associated" with a TaskManager and JobManager. This happens as soon as the
 * TaskManager actor gets created and registers itself at the JobManager.
 */
public class NetworkEnvironment {

    private final NetworkEnvironmentConfiguration configuration;

    private final NetworkBufferPool networkBufferPool;

    private ConnectionManager connectionManager;

    private ResultPartitionManager partitionManager;

    private ResultPartitionConsumableNotifier partitionConsumableNotifier;

    /**
     * ExecutionEnvironment which is used to execute remote calls with the
     * {@link JobManagerResultPartitionConsumableNotifier}
     */
    private final ExecutionContext executionContext;

    /**
     * Initializes all network I/O components.
     */
    public NetworkEnvironment(
            ExecutionContext executionContext,
            FiniteDuration jobManagerTimeout,
            NetworkEnvironmentConfiguration config) throws IOException {

        // create the network buffers - this is the operation most likely to fail upon
        // mis-configuration, so we do this first
        try {
            networkBufferPool = new NetworkBufferPool(
                    config.numNetworkBuffers(), config.networkBufferSize(), config.memoryType());
        }
        catch (Throwable t) {
            throw new IOException("Cannot allocate network buffer pool: " + t.getMessage(), t);
        }
    }
}
NetworkBufferPool
Let's look at the networkBufferPool first.
First, it manages a set of BufferPools rather than raw buffers: since a task manager has only one networkBufferPool, each task gets its own buffer pool allocated out of it.
Second, its memory management follows the same pattern as the MemoryManager: the configured number of segments is allocated from heap or off-heap memory and put into availableMemorySegments.
The constructor below shows how that heap (or off-heap) memory is allocated for the networkBufferPool.
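Before the source, a quick back-of-the-envelope calculation of the pool's footprint: the constructor simply allocates numNetworkBuffers segments of networkBufferSize bytes each. The numbers below are assumed example values (they match the common defaults of this Flink generation, 2048 buffers of 32 KB, but check your own configuration):

public class NetworkBufferFootprint {
    public static void main(String[] args) {
        // assumed example values; the real ones come from NetworkEnvironmentConfiguration
        int numNetworkBuffers = 2048;        // config.numNetworkBuffers()
        int networkBufferSize = 32 * 1024;   // config.networkBufferSize()

        long totalBytes = (long) numNetworkBuffers * networkBufferSize;
        System.out.println("Network buffer pool footprint: " + (totalBytes >> 20) + " MB"); // 64 MB
    }
}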
/**
 * The NetworkBufferPool is a fixed size pool of {@link MemorySegment} instances
 * for the network stack.
 *
 * The NetworkBufferPool creates {@link LocalBufferPool}s from which the individual tasks draw
 * the buffers for the network data transfer. When new local buffer pools are created, the
 * NetworkBufferPool dynamically redistributes the buffers between the pools.
 */
public class NetworkBufferPool implements BufferPoolFactory {

    private final int totalNumberOfMemorySegments;   // total number of MemorySegments managed by this pool

    private final int memorySegmentSize;             // size of each MemorySegment

    private final Queue<MemorySegment> availableMemorySegments;   // queue of available MemorySegments

    private final Set<LocalBufferPool> managedBufferPools = new HashSet<LocalBufferPool>();   // the managed LocalBufferPools; each task gets one

    public final Set<LocalBufferPool> allBufferPools = new HashSet<LocalBufferPool>();

    private int numTotalRequiredBuffers;

    /**
     * Allocates all {@link MemorySegment} instances managed by this pool.
     */
    public NetworkBufferPool(int numberOfSegmentsToAllocate, int segmentSize, MemoryType memoryType) {

        this.totalNumberOfMemorySegments = numberOfSegmentsToAllocate;
        this.memorySegmentSize = segmentSize;

        final long sizeInLong = (long) segmentSize;

        try {
            this.availableMemorySegments =
                    new ArrayBlockingQueue<MemorySegment>(numberOfSegmentsToAllocate);   // availableMemorySegments is sized to totalNumberOfMemorySegments
        }
        catch (OutOfMemoryError err) {
            // (error handling elided in this excerpt)
        }

        try {
            if (memoryType == MemoryType.HEAP) {   // segments can be allocated from heap or off-heap memory
                for (int i = 0; i < numberOfSegmentsToAllocate; i++) {
                    byte[] memory = new byte[segmentSize];
                    availableMemorySegments.add(MemorySegmentFactory.wrapPooledHeapMemory(memory, null));
                }
            }
            else if (memoryType == MemoryType.OFF_HEAP) {
                for (int i = 0; i < numberOfSegmentsToAllocate; i++) {
                    ByteBuffer memory = ByteBuffer.allocateDirect(segmentSize);
                    availableMemorySegments.add(MemorySegmentFactory.wrapPooledOffHeapMemory(memory, null));
                }
            }
            else {
                throw new IllegalArgumentException("Unknown memory type " + memoryType);
            }
        }   // (catch block elided in this excerpt)
    }

    public MemorySegment requestMemorySegment() {
        return availableMemorySegments.poll();   // a request simply takes one segment from availableMemorySegments
    }

    // This is not safe with regard to destroy calls, but it does not hurt, because destroy happens
    // only once at clean up time (task manager shutdown).
    public void recycle(MemorySegment segment) {
        availableMemorySegments.add(segment);   // recycling puts the segment back into availableMemorySegments
    }

    @Override
    public BufferPool createBufferPool(int numRequiredBuffers, boolean isFixedSize) throws IOException {
        // It is necessary to use a separate lock from the one used for buffer
        // requests to ensure deadlock freedom for failure cases.
        synchronized (factoryLock) {
            // Ensure that the number of required buffers can be satisfied.
            // With dynamic memory management this should become obsolete.
            if (numTotalRequiredBuffers + numRequiredBuffers > totalNumberOfMemorySegments) {   // make sure the already required buffers plus this request do not exceed the total
                throw new IOException(String.format("Insufficient number of network buffers: " +
                        "required %d, but only %d available. The total number of network " +
                        "buffers is currently set to %d. You can increase this " +
                        "number by setting the configuration key '%s'.",
                        numRequiredBuffers,
                        totalNumberOfMemorySegments - numTotalRequiredBuffers,
                        totalNumberOfMemorySegments,
                        ConfigConstants.TASK_MANAGER_NETWORK_NUM_BUFFERS_KEY));
            }

            this.numTotalRequiredBuffers += numRequiredBuffers;   // bump numTotalRequiredBuffers

            // We are good to go, create a new buffer pool and redistribute
            // non-fixed size buffers.
            LocalBufferPool localBufferPool = new LocalBufferPool(this, numRequiredBuffers);   // create the LocalBufferPool; no segments are handed over yet, requests are lazy

            // The fixed size pools get their share of buffers and don't change
            // it during their lifetime.
            if (!isFixedSize) {   // if not fixed size, surplus segments can be redistributed to it dynamically
                managedBufferPools.add(localBufferPool);
            }

            allBufferPools.add(localBufferPool);   // track the localBufferPool

            redistributeBuffers();

            return localBufferPool;
        }
    }

    // Must be called from synchronized block
    // The goal is to hand out the surplus segments as well, so they get used.
    private void redistributeBuffers() throws IOException {
        int numManagedBufferPools = managedBufferPools.size();

        if (numManagedBufferPools == 0) {
            return; // necessary to avoid div by zero when no managed pools
        }

        // All buffers, which are not among the required ones
        int numAvailableMemorySegment = totalNumberOfMemorySegments - numTotalRequiredBuffers;   // surplus segments

        // Available excess (not required) buffers per pool
        int numExcessBuffersPerPool = numAvailableMemorySegment / numManagedBufferPools;   // surplus averaged over the buffer pools

        // Distribute leftover buffers in round robin fashion
        int numLeftoverBuffers = numAvailableMemorySegment % numManagedBufferPools;   // remainder

        int bufferPoolIndex = 0;

        for (LocalBufferPool bufferPool : managedBufferPools) {
            int leftoverBuffers = bufferPoolIndex++ < numLeftoverBuffers ? 1 : 0;   // the leftover share is either 1 or 0

            bufferPool.setNumBuffers(bufferPool.getNumberOfRequiredMemorySegments()
                    + numExcessBuffersPerPool + leftoverBuffers);   // the surplus share is added on top of getNumberOfRequiredMemorySegments
        }
    }
}
As you can see, when a task needs a buffer pool, createBufferPool is called first.
It creates and returns a LocalBufferPool of the required size; note that no segments are handed over at this point, they are pulled lazily from availableMemorySegments only when buffers are actually requested.
The managedBufferPools set holds the pools whose size may change dynamically;
redistributeBuffers spreads the currently unassigned segments evenly over all managedBufferPools, as the worked example below illustrates.
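Here is a small self-contained sketch of the same division/remainder logic; the numbers are invented purely for illustration:

// Hypothetical walk-through of redistributeBuffers(): 1000 surplus segments, 3 managed pools.
public class RedistributeSketch {
    public static void main(String[] args) {
        int totalNumberOfMemorySegments = 2048;
        int numTotalRequiredBuffers = 1048;   // sum of all pools' required segments
        int numManagedBufferPools = 3;        // three non-fixed-size local pools

        int numAvailable = totalNumberOfMemorySegments - numTotalRequiredBuffers; // 1000 surplus segments
        int excessPerPool = numAvailable / numManagedBufferPools;                 // 333 extra per pool
        int leftover = numAvailable % numManagedBufferPools;                      // 1 leftover segment

        for (int i = 0; i < numManagedBufferPools; i++) {
            int extra = excessPerPool + (i < leftover ? 1 : 0);   // leftovers go round robin
            System.out.println("pool " + i + " -> required + " + extra + " segments");
        }
        // prints: pool 0 -> required + 334, pool 1 -> required + 333, pool 2 -> required + 333
    }
}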
LocalBufferPool
class LocalBufferPool implements BufferPool {

    private final NetworkBufferPool networkBufferPool;   // the global buffer pool

    // The minimum number of required segments for this pool
    private final int numberOfRequiredMemorySegments;    // the requested (minimum) number of MemorySegments

    // The current size of this pool
    private int currentPoolSize;                          // the actual number of MemorySegments; may be larger if the pool is not fixed size

    // The currently available memory segments. These are segments, which have been requested from
    // the network buffer pool and are currently not handed out as Buffer instances.
    private final Queue<MemorySegment> availableMemorySegments = new ArrayDeque<MemorySegment>();   // queue caching MemorySegments

    // Buffer availability listeners, which need to be notified when a Buffer becomes available.
    // Listeners can only be registered at a time/state where no Buffer instance was available.
    private final Queue<EventListener<Buffer>> registeredListeners = new ArrayDeque<EventListener<Buffer>>();

    // Number of all memory segments, which have been requested from the network buffer pool and are
    // somehow referenced through this pool (e.g. wrapped in Buffer instances or as available segments).
    private int numberOfRequestedMemorySegments;   // number of MemorySegments already handed out by the network buffer pool

    private boolean isDestroyed;

    private BufferPoolOwner owner;   // the owner is responsible for releasing buffers back to the networkBufferPool

    LocalBufferPool(NetworkBufferPool networkBufferPool, int numberOfRequiredMemorySegments) {
        this.networkBufferPool = networkBufferPool;
        this.numberOfRequiredMemorySegments = numberOfRequiredMemorySegments;   // initially, numberOfRequiredMemorySegments and currentPoolSize are equal
        this.currentPoolSize = numberOfRequiredMemorySegments;
    }

    @Override
    public int getMemorySegmentSize() {
        return networkBufferPool.getMemorySegmentSize();   // the size of a MemorySegment itself
    }

    @Override
    public int getNumBuffers() {
        synchronized (availableMemorySegments) {
            return currentPoolSize;   // the current size of this local pool
        }
    }

    private Buffer requestBuffer(boolean isBlocking) throws InterruptedException, IOException {
        synchronized (availableMemorySegments) {
            returnExcessMemorySegments();   // return over-requested MemorySegments, which can happen when the pool size changes dynamically

            boolean askToRecycle = owner != null;

            while (availableMemorySegments.isEmpty()) {   // no segment readily available
                if (numberOfRequestedMemorySegments < currentPoolSize) {   // we may only request more while numberOfRequestedMemorySegments is below currentPoolSize
                    final MemorySegment segment = networkBufferPool.requestMemorySegment();   // request a segment from the networkBufferPool

                    if (segment != null) {
                        numberOfRequestedMemorySegments++;
                        availableMemorySegments.add(segment);
                        continue;   // got one, retry the loop
                    }
                }

                if (askToRecycle) {   // nothing could be requested, so the networkBufferPool has run out of buffers
                    owner.releaseMemory(1);   // ask the owner to release a buffer back to the networkBufferPool
                }

                if (isBlocking) {
                    availableMemorySegments.wait(2000);
                }
                else {
                    return null;
                }
            }

            return new Buffer(availableMemorySegments.poll(), this);
        }
    }

    @Override
    public void recycle(MemorySegment segment) {
        synchronized (availableMemorySegments) {
            if (isDestroyed || numberOfRequestedMemorySegments > currentPoolSize) {
                returnMemorySegment(segment);   // return the segment directly to the networkBufferPool
            }
            else {
                EventListener<Buffer> listener = registeredListeners.poll();

                if (listener == null) {   // no listener registered, so put the segment back into availableMemorySegments
                    availableMemorySegments.add(segment);
                    availableMemorySegments.notify();   // notify waiters that availableMemorySegments has a new segment
                }
                else {
                    try {
                        listener.onEvent(new Buffer(segment, this));   // if there is a listener, fire onEvent so the listener handles this segment
                    }
                    catch (Throwable ignored) {
                        availableMemorySegments.add(segment);
                        availableMemorySegments.notify();
                    }
                }
            }
        }
    }

    @Override
    public void setNumBuffers(int numBuffers) throws IOException {
        synchronized (availableMemorySegments) {
            checkArgument(numBuffers >= numberOfRequiredMemorySegments,
                    "Buffer pool needs at least " + numberOfRequiredMemorySegments +
                    " buffers, but tried to set to " + numBuffers + ".");

            currentPoolSize = numBuffers;

            returnExcessMemorySegments();

            // If there is a registered owner and we have still requested more buffers than our
            // size, trigger a recycle via the owner.
            if (owner != null && numberOfRequestedMemorySegments > currentPoolSize) {
                owner.releaseMemory(numberOfRequestedMemorySegments - numBuffers);
            }
        }
    }
}
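Putting the two classes together, the typical lifecycle looks roughly like the sketch below. It is only illustrative: requestBufferBlocking(), getMemorySegment() and recycle() are assumed names for the buffer API of this Flink version, not verified signatures.

// A minimal lifecycle sketch under the assumptions stated above.
void bufferLifecycle(NetworkBufferPool networkBufferPool) throws Exception {
    // one non-fixed-size local pool per task, here sized for 4 subpartitions
    BufferPool localPool = networkBufferPool.createBufferPool(4, false);

    Buffer buffer = localPool.requestBufferBlocking();   // lazily pulls a segment from the global pool
    try {
        // ... write serialized records into buffer.getMemorySegment() ...
    } finally {
        buffer.recycle();   // hands the segment back via LocalBufferPool.recycle() shown above
    }
}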
associateWithTaskManagerAndJobManager
The first thing a NetworkEnvironment needs is to be associated; only then can it be used.
Many of the components inside NetworkEnvironment only need to be initialized once it is bound to the TaskManager and JobManager.
/**
 * This associates the network environment with a TaskManager and JobManager.
 * This will actually start the network components.
 *
 * @param jobManagerGateway Gateway to the JobManager.
 * @param taskManagerGateway Gateway to the TaskManager.
 *
 * @throws IOException Thrown if the network subsystem (Netty) cannot be properly started.
 */
public void associateWithTaskManagerAndJobManager(
        ActorGateway jobManagerGateway,
        ActorGateway taskManagerGateway) throws IOException {

    synchronized (lock) {

        if (this.partitionConsumableNotifier == null &&
            this.partitionManager == null &&
            this.taskEventDispatcher == null &&
            this.connectionManager == null) {

            // good, not currently associated. start the individual components

            LOG.debug("Starting result partition manager and network connection manager");
            this.partitionManager = new ResultPartitionManager();
            this.taskEventDispatcher = new TaskEventDispatcher();

            this.partitionConsumableNotifier = new JobManagerResultPartitionConsumableNotifier(
                    executionContext,
                    jobManagerGateway,
                    taskManagerGateway,
                    jobManagerTimeout);

            this.partitionStateChecker = new JobManagerPartitionStateChecker(
                    jobManagerGateway, taskManagerGateway);

            // -----  Network connections  -----
            final Option<NettyConfig> nettyConfig = configuration.nettyConfig();
            connectionManager = nettyConfig.isDefined() ? new NettyConnectionManager(nettyConfig.get())
                                                        : new LocalConnectionManager();

            try {
                LOG.debug("Starting network connection manager");
                connectionManager.start(partitionManager, taskEventDispatcher, networkBufferPool);
            }
            catch (Throwable t) {
                throw new IOException("Failed to instantiate network connection manager: " + t.getMessage(), t);
            }
        }
        else {
            throw new IllegalStateException(
                    "Network Environment is already associated with a JobManager/TaskManager");
        }
    }
}
This mainly initializes a series of components: TaskEventDispatcher, ConnectionManager, ResultPartitionManager,
JobManagerResultPartitionConsumableNotifier, and JobManagerPartitionStateChecker.
For the ConnectionManager, if a Netty config is defined, a NettyConnectionManager is created,
which mainly initializes the Netty client and the Netty server;
otherwise a LocalConnectionManager is created.
The ResultPartitionManager is mainly used to track all result partitions.
Its core structure is Table<ExecutionAttemptID, IntermediateResultPartitionID, ResultPartition> registeredPartitions = HashBasedTable.create();
which records every ResultPartition.
/**
 * The result partition manager keeps track of all currently produced/consumed partitions of a
 * task manager.
 */
public class ResultPartitionManager implements ResultPartitionProvider {

    private static final Logger LOG = LoggerFactory.getLogger(ResultPartitionManager.class);

    public final Table<ExecutionAttemptID, IntermediateResultPartitionID, ResultPartition>
            registeredPartitions = HashBasedTable.create();

    private boolean isShutdown;

    public void registerResultPartition(ResultPartition partition) throws IOException {
        synchronized (registeredPartitions) {
            checkState(!isShutdown, "Result partition manager already shut down.");

            ResultPartitionID partitionId = partition.getPartitionId();

            ResultPartition previous = registeredPartitions.put(
                    partitionId.getProducerId(), partitionId.getPartitionId(), partition);
        }
    }
}
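registeredPartitions is a Guava HashBasedTable, i.e. a map with two keys: the row key is the producing execution attempt, the column key is the intermediate partition id. A small standalone sketch of that lookup pattern, using plain strings instead of the Flink id types purely for illustration:

import com.google.common.collect.HashBasedTable;
import com.google.common.collect.Table;

public class TableLookupSketch {
    public static void main(String[] args) {
        Table<String, String, String> registered = HashBasedTable.create();

        // registerResultPartition: keyed by (producerId, partitionId)
        registered.put("executionAttempt-1", "partition-A", "ResultPartition@1");

        // lookup by the composite key, as a consumer's partition request would do
        String partition = registered.get("executionAttempt-1", "partition-A");
        System.out.println(partition);

        // all partitions produced by one execution attempt
        System.out.println(registered.row("executionAttempt-1").keySet());
    }
}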
JobManagerResultPartitionConsumableNotifier is quite important: it notifies the JobManager that a ResultPartition is ready and can start being consumed.
private static class JobManagerResultPartitionConsumableNotifier
        implements ResultPartitionConsumableNotifier {

    /**
     * {@link ExecutionContext} which is used for the failure handler of {@link ScheduleOrUpdateConsumers}
     * messages.
     */
    private final ExecutionContext executionContext;

    private final ActorGateway jobManager;

    private final ActorGateway taskManager;

    private final FiniteDuration jobManagerMessageTimeout;

    @Override
    public void notifyPartitionConsumable(JobID jobId, final ResultPartitionID partitionId) {

        final ScheduleOrUpdateConsumers msg = new ScheduleOrUpdateConsumers(jobId, partitionId);   // tell the JobManager to deploy the consumers

        Future<Object> futureResponse = jobManager.ask(msg, jobManagerMessageTimeout);   // wait for the JobManager's reply

        futureResponse.onFailure(new OnFailure() {   // failure means the consumers could not be deployed
            @Override
            public void onFailure(Throwable failure) {
                LOG.error("Could not schedule or update consumers at the JobManager.", failure);

                // Fail task at the TaskManager
                FailTask failMsg = new FailTask(
                        partitionId.getProducerId(),
                        new RuntimeException("Could not notify JobManager to schedule or update consumers",
                                failure));

                taskManager.tell(failMsg);
            }
        }, executionContext);
    }
}
RegisterTask
One of the more important operations in the NetworkEnvironment is registering a task: buffer pools must be allocated for the task's ResultPartitions and InputGates.
public void registerTask(Task task) throws IOException {
    final ResultPartition[] producedPartitions = task.getProducedPartitions();
    final ResultPartitionWriter[] writers = task.getAllWriters();

    ResultPartitionConsumableNotifier jobManagerNotifier;

    synchronized (lock) {

        for (int i = 0; i < producedPartitions.length; i++) {
            final ResultPartition partition = producedPartitions[i];
            final ResultPartitionWriter writer = writers[i];

            // Buffer pool for the partition
            BufferPool bufferPool = null;

            try {
                bufferPool = networkBufferPool.createBufferPool(partition.getNumberOfSubpartitions(), false);
                // create the LocalBufferPool; note that the required number of segments equals
                // the number of subpartitions, i.e. one segment per subpartition
                partition.registerBufferPool(bufferPool);              // register the local pool with the ResultPartition

                partitionManager.registerResultPartition(partition);   // register the partition with the partitionManager
            }   // (catch block elided in this excerpt)

            // Register writer with task event dispatcher
            taskEventDispatcher.registerWriterForIncomingTaskEvents(writer.getPartitionId(), writer);
        }

        // Setup the buffer pool for each buffer reader
        final SingleInputGate[] inputGates = task.getAllInputGates();

        for (SingleInputGate gate : inputGates) {
            BufferPool bufferPool = null;

            try {
                bufferPool = networkBufferPool.createBufferPool(gate.getNumberOfInputChannels(), false);
                gate.setBufferPool(bufferPool);
            }   // (catch block elided in this excerpt)
        }

        // Copy the reference to prevent races with concurrent shut downs
        jobManagerNotifier = partitionConsumableNotifier;
    }

    for (ResultPartition partition : producedPartitions) {
        // Eagerly notify consumers if required.
        if (partition.getEagerlyDeployConsumers()) {   // with eager deployment, tell the JobManager that consumers can be deployed now
            jobManagerNotifier.notifyPartitionConsumable(
                    partition.getJobId(), partition.getPartitionId());
        }
    }
}
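So per task, the required (minimum) number of network buffers adds up to the number of subpartitions of each produced ResultPartition plus the number of input channels of each InputGate. A quick sketch with hypothetical numbers:

// Hypothetical task: 2 produced ResultPartitions with 4 subpartitions each,
// plus 1 InputGate with 8 input channels.
public class RegisterTaskBufferMath {
    public static void main(String[] args) {
        int[] subpartitionsPerPartition = {4, 4};   // one LocalBufferPool per partition, sized like this
        int[] channelsPerGate = {8};                // one LocalBufferPool per gate, sized like this

        int required = 0;
        for (int n : subpartitionsPerPartition) required += n;
        for (int n : channelsPerGate) required += n;

        System.out.println("Minimum network buffers required by this task: " + required); // 16
        // Since the pools are created with isFixedSize = false, redistributeBuffers() may grant
        // each of them extra segments on top of this minimum.
    }
}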