OkHttp 3.7 Source Code Analysis (4): Cache Strategy
Using a local cache wisely can significantly reduce network overhead and response latency. HTTP also defines a number of header fields for controlling caching. This article walks through how the caching layer is implemented in OkHttp.
1. HTTP Caching Strategy
Let's first review the cache-related header fields defined by HTTP.
1.1 Expires
An expiry timestamp, usually sent in a server response header to tell the client when the corresponding resource expires. When the client needs the same resource again, it first checks this expiry time: if the resource has not expired yet, the cached result is returned directly; if it has expired, the resource is requested again.
1.2 Cache-Control
A relative value in seconds that specifies how long the resource stays valid. Cache-Control has higher priority than Expires:
Cache-Control:max-age=31536000,public
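On the client side, OkHttp models these directives with its CacheControl class, which can be attached to a request. The snippet below is a minimal sketch (the URL is a placeholder) showing how a max-age directive ends up as a Cache-Control request header:
import java.util.concurrent.TimeUnit;

import okhttp3.CacheControl;
import okhttp3.Request;

public class CacheControlExample {
  public static void main(String[] args) {
    // Accept a cached copy that is at most 60 seconds old.
    CacheControl cacheControl = new CacheControl.Builder()
        .maxAge(60, TimeUnit.SECONDS)
        .build();

    Request request = new Request.Builder()
        .url("https://example.com/resource") // placeholder URL
        .cacheControl(cacheControl)
        .build();

    // Prints "max-age=60": the directive is sent as a Cache-Control header.
    System.out.println(request.header("Cache-Control"));
  }
}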
1.3 Conditional GET Requests
1.3.1 Last-Modified
On the client's first request, the server returns:
Last-Modified: Tue, 12 Jan 2016 09:31:27 GMT
On a subsequent request, the client can add the following header:
If-Modified-Since: Tue, 12 Jan 2016 09:31:27 GMT
If the resource has not been modified since then, the server returns 304 and the client reuses its local cache.
1.3.2 ETag
An ETag is a digest of a resource file; its value can be used to tell whether the file has changed. When the client first requests a resource, the server returns:
ETag: "5694c7ef-24dc"
On subsequent requests, the client can add the following header:
If-None-Match: "5694c7ef-24dc"
If the file has not changed, the server returns 304 and the client can reuse its local cache.
1.4 no-cache/no-store
no-cache means a cached copy must be revalidated with the server before it may be used; no-store means the response must not be stored in any cache at all.
1.5 only-if-cached
Use only the cache and never the network; if no valid cached response exists, the request fails (OkHttp returns a synthetic 504).
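OkHttp ships these two extremes as the predefined constants CacheControl.FORCE_CACHE (only-if-cached plus an unlimited max-stale) and CacheControl.FORCE_NETWORK (no-cache). A minimal sketch with a placeholder URL:
import okhttp3.CacheControl;
import okhttp3.Request;

public class ForceCacheExample {
  public static void main(String[] args) {
    // Never touch the network; if nothing valid is cached, OkHttp answers with a
    // synthetic 504 "Unsatisfiable Request (only-if-cached)" response (see CacheInterceptor below).
    Request cacheOnly = new Request.Builder()
        .url("https://example.com/resource") // placeholder URL
        .cacheControl(CacheControl.FORCE_CACHE)
        .build();

    // The opposite extreme: always bypass the cache.
    Request networkOnly = new Request.Builder()
        .url("https://example.com/resource")
        .cacheControl(CacheControl.FORCE_NETWORK)
        .build();

    System.out.println(cacheOnly.cacheControl().onlyIfCached()); // true
    System.out.println(networkOnly.cacheControl().noCache());    // true
  }
}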
2. Cache Source Code Analysis
All of OkHttp's caching work is done in CacheInterceptor. The cache layer involves the following key classes:
- Cache: the cache manager. Internally it holds a DiskLruCache that persists cached responses to the file system:
 * <h3>Cache Optimization</h3>
 *
 * <p>To measure cache effectiveness, this class tracks three statistics:
 * <ul>
 *   <li><strong>{@linkplain #requestCount() Request Count:}</strong> the number of HTTP
 *       requests issued since this cache was created.
 *   <li><strong>{@linkplain #networkCount() Network Count:}</strong> the number of those
 *       requests that required network use.
 *   <li><strong>{@linkplain #hitCount() Hit Count:}</strong> the number of those requests
 *       whose responses were served by the cache.
 * </ul>
 *
 * Sometimes a request will result in a conditional cache hit. If the cache contains a stale copy of
 * the response, the client will issue a conditional {@code GET}. The server will then send either
 * the updated response if it has changed, or a short 'not modified' response if the client's copy
 * is still valid. Such responses increment both the network count and hit count.
 *
 * <p>The best way to improve the cache hit rate is by configuring the web server to return
 * cacheable responses. Although this client honors all <a
 * href="https://tools.ietf.org/html/rfc7234">HTTP/1.1 (RFC 7234)</a> cache headers, it doesn't cache
 * partial responses.
Internally, Cache maintains three statistics, requestCount, networkCount and hitCount, to measure cache effectiveness (a short usage sketch follows this list).
- CacheStrategy: the cache strategy. It holds a networkRequest and a cacheResponse; by setting either of them (or both) it describes whether the response should be obtained from the network, from the cache, or from both:
[CacheStrategy.java]
/**
 * Given a request and cached response, this figures out whether to use the network, the cache, or
 * both.
 *
 * <p>Selecting a cache strategy may add conditions to the request (like the "If-Modified-Since"
 * header for conditional GETs) or warnings to the cached response (if the cached data is
 * potentially stale).
 */
public final class CacheStrategy {
  /** The request to send on the network, or null if this call doesn't use the network. */
  public final Request networkRequest;

  /** The cached response to return or validate; or null if this call doesn't use a cache. */
  public final Response cacheResponse;
  ......
}
- CacheStrategy$Factory: the strategy factory. It inspects the actual request and cached response and returns the appropriate CacheStrategy.
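As mentioned in the Cache bullet above, here is a minimal usage sketch (the cache directory and URL are placeholders) that creates an okhttp3.Cache, attaches it to a client, and reads the three counters after a request:
import java.io.File;
import java.io.IOException;

import okhttp3.Cache;
import okhttp3.OkHttpClient;
import okhttp3.Request;
import okhttp3.Response;

public class CacheStatsExample {
  public static void main(String[] args) throws IOException {
    // A 10 MiB cache in a local directory (placeholder path).
    Cache cache = new Cache(new File("okhttp-cache"), 10L * 1024 * 1024);
    OkHttpClient client = new OkHttpClient.Builder().cache(cache).build();

    Request request = new Request.Builder().url("https://example.com/resource").build();
    try (Response response = client.newCall(request).execute()) {
      response.body().string(); // consume the body so it can be written to the cache
    }

    // The three statistics tracked by Cache.
    System.out.println("requests=" + cache.requestCount()
        + " network=" + cache.networkCount()
        + " hits=" + cache.hitCount());
  }
}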
Since the actual caching work all happens in CacheInterceptor, let's look at the source of its core intercept method:
[CacheInterceptor.java]
@Override public Response intercept(Chain chain) throws IOException {
//First, try to obtain a cached response
Response cacheCandidate = cache != null
? cache.get(chain.request())
: null;
long now = System.currentTimeMillis();
//Determine the cache strategy
CacheStrategy strategy = new CacheStrategy.Factory(now, chain.request(), cacheCandidate).get();
Request networkRequest = strategy.networkRequest;
Response cacheResponse = strategy.cacheResponse;
//If a cache is configured, update its statistics (request/network/hit counts)
if (cache != null) {
cache.trackResponse(strategy);
}
//If the cached candidate is not applicable, close it
if (cacheCandidate != null && cacheResponse == null) {
closeQuietly(cacheCandidate.body()); // The cache candidate wasn't applicable. Close it.
}
// If the network may not be used and there is no valid cached response, fail with a 504
if (networkRequest == null && cacheResponse == null) {
return new Response.Builder()
.request(chain.request())
.protocol(Protocol.HTTP_1_1)
.code(504)
.message("Unsatisfiable Request (only-if-cached)")
.body(Util.EMPTY_RESPONSE)
.sentRequestAtMillis(-1L)
.receivedResponseAtMillis(System.currentTimeMillis())
.build();
}
// If we have a cached response and the network is not needed, return the cached result directly
if (networkRequest == null) {
return cacheResponse.newBuilder()
.cacheResponse(stripBody(cacheResponse))
.build();
}
//Try to obtain the response from the network
Response networkResponse = null;
try {
networkResponse = chain.proceed(networkRequest);
} finally {
// If we're crashing on I/O or otherwise, don't leak the cache body.
if (networkResponse == null && cacheCandidate != null) {
closeQuietly(cacheCandidate.body());
}
}
// If we have a cached response and still issued a network request, this was a conditional GET
if (cacheResponse != null) {
// If the server answered NOT_MODIFIED, the cache is still valid: merge the cached response with the network response's headers
if (networkResponse.code() == HTTP_NOT_MODIFIED) {
Response response = cacheResponse.newBuilder()
.headers(combine(cacheResponse.headers(), networkResponse.headers()))
.sentRequestAtMillis(networkResponse.sentRequestAtMillis())
.receivedResponseAtMillis(networkResponse.receivedResponseAtMillis())
.cacheResponse(stripBody(cacheResponse))
.networkResponse(stripBody(networkResponse))
.build();
networkResponse.body().close();
// Update the cache after combining headers but before stripping the
// Content-Encoding header (as performed by initContentStream()).
cache.trackConditionalCacheHit();
cache.update(cacheResponse, response);
return response;
} else {// The resource has changed on the server, so discard the old cached body
closeQuietly(cacheResponse.body());
}
}
Response response = networkResponse.newBuilder()
.cacheResponse(stripBody(cacheResponse))
.networkResponse(stripBody(networkResponse))
.build();
if (cache != null) {
if (HttpHeaders.hasBody(response) && CacheStrategy.isCacheable(response, networkRequest)) {
// Write the network response into the cache
CacheRequest cacheRequest = cache.put(response);
return cacheWritingResponse(cacheRequest, response);
}
if (HttpMethod.invalidatesCache(networkRequest.method())) {
try {
cache.remove(networkRequest);
} catch (IOException ignored) {
// The cache cannot be written.
}
}
}
return response;
}
The core logic is annotated directly in the code above. As you can see, almost every decision is driven by the CacheStrategy, so let's look next at how the strategy is produced. The implementation lives in CacheStrategy$Factory.get():
[CacheStrategy$Factory]
/**
* Returns a strategy to satisfy {@code request} using the a cached response {@code response}.
*/
public CacheStrategy get() {
CacheStrategy candidate = getCandidate();
if (candidate.networkRequest != null && request.cacheControl().onlyIfCached()) {
// We're forbidden from using the network and the cache is insufficient.
return new CacheStrategy(null, null);
}
return candidate;
}
/** Returns a strategy to use assuming the request can use the network. */
private CacheStrategy getCandidate() {
// No local cached response: go to the network
if (cacheResponse == null) {
return new CacheStrategy(request, null);
}
// If the request is HTTPS but the cached response has no TLS handshake, go to the network again
if (request.isHttps() && cacheResponse.handshake() == null) {
return new CacheStrategy(request, null);
}
// If this response shouldn't have been stored, it should never be used
// as a response source. This check should be redundant as long as the
// persistence store is well-behaved and the rules are constant.
if (!isCacheable(cacheResponse, request)) {
return new CacheStrategy(request, null);
}
//If the request says no-cache, or already carries conditional headers (If-Modified-Since / If-None-Match), go to the network
CacheControl requestCaching = request.cacheControl();
if (requestCaching.noCache() || hasConditions(request)) {
return new CacheStrategy(request, null);
}
//ageMillis: the current age of the cached response
long ageMillis = cacheResponseAge();
//freshMillis: how long the cached response remains fresh
long freshMillis = computeFreshnessLifetime();
if (requestCaching.maxAgeSeconds() != -1) {
freshMillis = Math.min(freshMillis, SECONDS.toMillis(requestCaching.maxAgeSeconds()));
}
long minFreshMillis = 0;
if (requestCaching.minFreshSeconds() != -1) {
minFreshMillis = SECONDS.toMillis(requestCaching.minFreshSeconds());
}
long maxStaleMillis = 0;
CacheControl responseCaching = cacheResponse.cacheControl();
if (!responseCaching.mustRevalidate() && requestCaching.maxStaleSeconds() != -1) {
maxStaleMillis = SECONDS.toMillis(requestCaching.maxStaleSeconds());
}
//If age + min-fresh < fresh lifetime + max-stale, the cached response may still be served.
//If it is already stale (age + min-fresh >= fresh lifetime), a 110 "Response is stale" warning is added to its headers.
if (!responseCaching.noCache() && ageMillis + minFreshMillis < freshMillis + maxStaleMillis) {
Response.Builder builder = cacheResponse.newBuilder();
if (ageMillis + minFreshMillis >= freshMillis) {
builder.addHeader("Warning", "110 HttpURLConnection \"Response is stale\"");
}
long oneDayMillis = 24 * 60 * 60 * 1000L;
if (ageMillis > oneDayMillis && isFreshnessLifetimeHeuristic()) {
builder.addHeader("Warning", "113 HttpURLConnection \"Heuristic expiration\"");
}
return new CacheStrategy(null, builder.build());
}
// Otherwise, build a conditional GET request
String conditionName;
String conditionValue;
if (etag != null) {
conditionName = "If-None-Match";
conditionValue = etag;
} else if (lastModified != null) {
conditionName = "If-Modified-Since";
conditionValue = lastModifiedString;
} else if (servedDate != null) {
conditionName = "If-Modified-Since";
conditionValue = servedDateString;
} else {
return new CacheStrategy(request, null); // No condition! Make a regular request.
}
Headers.Builder conditionalRequestHeaders = request.headers().newBuilder();
Internal.instance.addLenient(conditionalRequestHeaders, conditionName, conditionValue);
Request conditionalRequest = request.newBuilder()
.headers(conditionalRequestHeaders.build())
.build();
return new CacheStrategy(conditionalRequest, cacheResponse);
}
The core logic is in getCandidate(), and it is essentially a straight implementation of the HTTP caching rules from section 1; the key steps are annotated in the code above.
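To make the freshness arithmetic concrete, here is a small worked example with assumed numbers: the cached response is 80 seconds old, its max-age is 60 seconds, and the request carries min-fresh=0 and max-stale=30. Then age + min-fresh (80s) reaches the fresh lifetime (60s), so the response is stale, but 80s is still below 60s + 30s, so it may be served with a 110 warning. The same check as a standalone sketch:
import static java.util.concurrent.TimeUnit.SECONDS;

public class FreshnessCheckExample {
  public static void main(String[] args) {
    // Assumed values, mirroring the variables in CacheStrategy.getCandidate().
    long ageMillis = SECONDS.toMillis(80);      // age of the cached response
    long freshMillis = SECONDS.toMillis(60);    // max-age of the cached response
    long minFreshMillis = SECONDS.toMillis(0);  // min-fresh from the request
    long maxStaleMillis = SECONDS.toMillis(30); // max-stale from the request

    boolean servableFromCache = ageMillis + minFreshMillis < freshMillis + maxStaleMillis;
    boolean stale = ageMillis + minFreshMillis >= freshMillis;

    System.out.println("servable from cache: " + servableFromCache); // true
    System.out.println("needs 110 'Response is stale' warning: " + stale); // true
  }
}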
3. DiskLruCache
Cache delegates the file-system-level work of creating, reading, and cleaning up cache entries to DiskLruCache. Let's look at DiskLruCache's main logic:
public final class DiskLruCache implements Closeable, Flushable {
final FileSystem fileSystem;
final File directory;
private final File journalFile;
private final File journalFileTmp;
private final File journalFileBackup;
private final int appVersion;
private long maxSize;
final int valueCount;
private long size = 0;
BufferedSink journalWriter;
final LinkedHashMap<String, Entry> lruEntries = new LinkedHashMap<>(0, 0.75f, true);
// Must be read and written when synchronized on 'this'.
boolean initialized;
boolean closed;
boolean mostRecentTrimFailed;
boolean mostRecentRebuildFailed;
/**
* To differentiate between old and current snapshots, each entry is given a sequence number each
* time an edit is committed. A snapshot is stale if its sequence number is not equal to its
* entry's sequence number.
*/
private long nextSequenceNumber = 0;
/** Used to run 'cleanupRunnable' for journal rebuilds. */
private final Executor executor;
private final Runnable cleanupRunnable = new Runnable() {
public void run() {
......
}
};
...
}
3.1 journalFile
The DiskLruCache journal. Every read and write of the cache appends a record to this log; by replaying the journal, DiskLruCache can analyze past operations and rebuild the cache state. The journal format looks like this:
libcore.io.DiskLruCache
1
100
2
CLEAN 3400330d1dfc7f3f7f4b8d4d803dfcf6 832 21054
DIRTY 335c4c6028171cfddfbaae1a9c313c52
CLEAN 335c4c6028171cfddfbaae1a9c313c52 3934 2342
REMOVE 335c4c6028171cfddfbaae1a9c313c52
DIRTY 1ab96a171faeeee38496d8b330771a7a
CLEAN 1ab96a171faeeee38496d8b330771a7a 1600 234
READ 335c4c6028171cfddfbaae1a9c313c52
READ 3400330d1dfc7f3f7f4b8d4d803dfcf6
The first five lines are fixed: the constant libcore.io.DiskLruCache, the DiskLruCache version, the application version, valueCount (explained below), and a blank line.
Each subsequent line records one state change of a cache entry, in the format [state (DIRTY, CLEAN, READ, REMOVE), key, optional state-specific values]:
- DIRTY: an entry is being created or updated. Every successful DIRTY record should eventually be followed by a matching CLEAN or REMOVE; a DIRTY without one means the operation failed and the entry must be dropped from lruEntries
- CLEAN: the entry has been successfully written and can be read normally. A CLEAN line also records the length of each of the entry's values
- READ: records a read of the entry
- REMOVE: records a removal of the entry
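The record format is simple enough to interpret by splitting each line on spaces. The sketch below is only an illustration of the format described above, not the actual DiskLruCache.readJournalLine implementation:
public class JournalLineExample {
  // Illustrative only: interpret one journal line such as
  // "CLEAN 3400330d1dfc7f3f7f4b8d4d803dfcf6 832 21054".
  static void describeJournalLine(String line) {
    String[] parts = line.split(" ");
    String state = parts[0]; // DIRTY, CLEAN, READ or REMOVE
    String key = parts[1];
    if ("CLEAN".equals(state)) {
      // A CLEAN record also carries the length of each of the entry's value files.
      for (int i = 2; i < parts.length; i++) {
        System.out.println(key + " value[" + (i - 2) + "] length=" + parts[i]);
      }
    } else {
      System.out.println(state + " " + key);
    }
  }

  public static void main(String[] args) {
    describeJournalLine("CLEAN 3400330d1dfc7f3f7f4b8d4d803dfcf6 832 21054");
    describeJournalLine("READ 335c4c6028171cfddfbaae1a9c313c52");
  }
}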
The journal is used in four main places:
- When DiskLruCache initializes, it replays the journal to build the in-memory cache container lruEntries, filtering out entries whose operations did not complete. See DiskLruCache.readJournalLine and DiskLruCache.processJournal
- After initialization, the journal is rebuilt and compacted to keep it from growing without bound; see DiskLruCache.rebuildJournal
- Every subsequent cache operation is appended to the journal so it can be replayed on the next initialization
- When too many redundant records accumulate, the cleanupRunnable task is scheduled to rebuild the journal
3.2 DiskLruCache.Entry
Each DiskLruCache.Entry corresponds to one cache record:
private final class Entry {
final String key;
/** Lengths of this entry's files. */
final long[] lengths;
final File[] cleanFiles;
final File[] dirtyFiles;
/** True if this entry has ever been published. */
boolean readable;
/** The ongoing edit or null if this entry is not being edited. */
Editor currentEditor;
/** The sequence number of the most recently committed edit to this entry. */
long sequenceNumber;
Entry(String key) {
this.key = key;
lengths = new long[valueCount];
cleanFiles = new File[valueCount];
dirtyFiles = new File[valueCount];
// The names are repetitive so re-use the same builder to avoid allocations.
StringBuilder fileBuilder = new StringBuilder(key).append('.');
int truncateTo = fileBuilder.length();
for (int i = 0; i < valueCount; i++) {
fileBuilder.append(i);
cleanFiles[i] = new File(directory, fileBuilder.toString());
fileBuilder.append(".tmp");
dirtyFiles[i] = new File(directory, fileBuilder.toString());
fileBuilder.setLength(truncateTo);
}
}
...
/**
* Returns a snapshot of this entry. This opens all streams eagerly to guarantee that we see a
* single published snapshot. If we opened streams lazily then the streams could come from
* different edits.
*/
Snapshot snapshot() {
if (!Thread.holdsLock(DiskLruCache.this)) throw new AssertionError();
Source[] sources = new Source[valueCount];
long[] lengths = this.lengths.clone(); // Defensive copy since these can be zeroed out.
try {
for (int i = 0; i < valueCount; i++) {
sources[i] = fileSystem.source(cleanFiles[i]);
}
return new Snapshot(key, sequenceNumber, sources, lengths);
} catch (FileNotFoundException e) {
// A file must have been deleted manually!
for (int i = 0; i < valueCount; i++) {
if (sources[i] != null) {
Util.closeQuietly(sources[i]);
} else {
break;
}
}
// Since the entry is no longer valid, remove it so the metadata is accurate (i.e. the cache
// size.)
try {
removeEntry(this);
} catch (IOException ignored) {
}
return null;
}
}
}
An Entry consists of the following parts:
- key: every cache entry is identified by a key, which is the MD5 hex string of the corresponding URL (a sketch of this derivation follows this list)
- cleanFiles/dirtyFiles: each Entry owns several files; the count is given by DiskLruCache.valueCount, which OkHttp sets to 2. So each entry has 2 cleanFiles and 2 dirtyFiles: the first file holds the cache metadata (URL, creation time, SSL handshake, and so on) and the second holds the actual cached body. cleanFiles hold the stable, readable state of the cache; dirtyFiles hold the state that is being created or updated
- currentEditor: the entry's editor. All modifications of an entry go through its editor, which synchronizes internally
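As referenced in the key bullet, okhttp3.Cache derives the key from the request URL; in OkHttp 3.x this is essentially the MD5 of the URL string, hex encoded. A small sketch of that derivation using Okio's ByteString (the URL is a placeholder):
import okio.ByteString;

public class CacheKeyExample {
  public static void main(String[] args) {
    String url = "https://example.com/resource"; // placeholder URL
    // Roughly what Cache.key(...) does: MD5 of the URL, hex encoded.
    String key = ByteString.encodeUtf8(url).md5().hex();
    System.out.println(key); // a 32-character hex string used as the entry's key
  }
}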
3.3 cleanupRunnable
The cleanup task, used to trim the cache to its size limit and to rebuild and compact the journal:
private final Runnable cleanupRunnable = new Runnable() {
public void run() {
synchronized (DiskLruCache.this) {
if (!initialized | closed) {
return; // Nothing to do
}
try {
trimToSize();
} catch (IOException ignored) {
mostRecentTrimFailed = true;
}
try {
if (journalRebuildRequired()) {
rebuildJournal();
redundantOpCount = 0;
}
} catch (IOException e) {
mostRecentRebuildFailed = true;
journalWriter = Okio.buffer(Okio.blackhole());
}
}
}
};
The rebuild is triggered according to journalRebuildRequired():
/**
* We only rebuild the journal when it will halve the size of the journal and eliminate at least
* 2000 ops.
*/
boolean journalRebuildRequired() {
final int redundantOpCompactThreshold = 2000;
return redundantOpCount >= redundantOpCompactThreshold
&& redundantOpCount >= lruEntries.size();
}
That is, the journal is rebuilt only when there are at least 2000 redundant records and they account for at least half of the journal, so rebuilding will roughly halve its size.
3.4 Snapshot
A snapshot records the contents of a particular cache entry at a particular moment. Every read from DiskLruCache returns a snapshot of the target entry; the relevant logic is in DiskLruCache.get:
[DiskLruCache.java]
/**
* Returns a snapshot of the entry named {@code key}, or null if it doesn't exist is not currently
* readable. If a value is returned, it is moved to the head of the LRU queue.
*/
public synchronized Snapshot get(String key) throws IOException {
initialize();
checkNotClosed();
validateKey(key);
Entry entry = lruEntries.get(key);
if (entry == null || !entry.readable) return null;
Snapshot snapshot = entry.snapshot();
if (snapshot == null) return null;
redundantOpCount++;
//Append a READ record to the journal
journalWriter.writeUtf8(READ).writeByte(' ').writeUtf8(key).writeByte('\n');
if (journalRebuildRequired()) {
executor.execute(cleanupRunnable);
}
return snapshot;
}
3.5 lruEntries
The container that holds the cache entries, backed by a LinkedHashMap. The LRU replacement policy falls out of LinkedHashMap's own access-ordering behavior.
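The access ordering comes from the third constructor argument in the field declaration shown earlier (new LinkedHashMap<>(0, 0.75f, true)). A minimal sketch of what accessOrder = true buys:
import java.util.LinkedHashMap;

public class AccessOrderExample {
  public static void main(String[] args) {
    // accessOrder = true: iteration order is least-recently-accessed first.
    LinkedHashMap<String, String> map = new LinkedHashMap<>(0, 0.75f, true);
    map.put("a", "1");
    map.put("b", "2");
    map.put("c", "3");

    map.get("a"); // touching "a" moves it to the most-recently-used end

    // Prints [b, c, a]: evicting from the head removes the least recently used entry,
    // which is what DiskLruCache relies on when trimming to maxSize.
    System.out.println(map.keySet());
  }
}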
3.6 FileSystem
A wrapper around File based on Okio, which simplifies I/O.
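As an illustration of the Okio style used throughout DiskLruCache, writing and reading a small file takes only a few lines. This sketch uses Okio directly on a placeholder file rather than OkHttp's internal FileSystem interface:
import java.io.File;
import java.io.IOException;

import okio.BufferedSink;
import okio.BufferedSource;
import okio.Okio;

public class OkioExample {
  public static void main(String[] args) throws IOException {
    File file = new File("journal-demo.txt"); // placeholder file

    // Write a line, much like DiskLruCache appends journal records.
    try (BufferedSink sink = Okio.buffer(Okio.sink(file))) {
      sink.writeUtf8("CLEAN 3400330d1dfc7f3f7f4b8d4d803dfcf6 832 21054").writeByte('\n');
    }

    // Read it back.
    try (BufferedSource source = Okio.buffer(Okio.source(file))) {
      System.out.println(source.readUtf8Line());
    }
  }
}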
3.7 DiskLruCache.edit
DiskLruCache can be seen as the file-system-level implementation of Cache, so their basic operations map one to one:
- Cache.get() -> DiskLruCache.get()
- Cache.put() -> DiskLruCache.edit() // insert a cache entry
- Cache.remove() -> DiskLruCache.remove()
- Cache.update() -> DiskLruCache.edit() // update a cache entry
get was covered in section 3.4 and remove is straightforward; put and update follow roughly the same path. To keep this short, only Cache.put is walked through here; the rest is easy to follow in the code:
[okhttp3.Cache.java]
CacheRequest put(Response response) {
String requestMethod = response.request().method();
if (HttpMethod.invalidatesCache(response.request().method())) {
try {
remove(response.request());
} catch (IOException ignored) {
// The cache cannot be written.
}
return null;
}
if (!requestMethod.equals("GET")) {
// Don't cache non-GET responses. We're technically allowed to cache
// HEAD requests and some POST requests, but the complexity of doing
// so is high and the benefit is low.
return null;
}
if (HttpHeaders.hasVaryAll(response)) {
return null;
}
Entry entry = new Entry(response);
DiskLruCache.Editor editor = null;
try {
editor = cache.edit(key(response.request().url()));
if (editor == null) {
return null;
}
entry.writeTo(editor);
return new CacheRequestImpl(editor);
} catch (IOException e) {
abortQuietly(editor);
return null;
}
}
The core of this is editor = cache.edit(key(response.request().url()));, which is implemented in DiskLruCache.edit:
[okhttp3.internal.cache.DiskLruCache.java]
synchronized Editor edit(String key, long expectedSequenceNumber) throws IOException {
initialize();
checkNotClosed();
validateKey(key);
Entry entry = lruEntries.get(key);
if (expectedSequenceNumber != ANY_SEQUENCE_NUMBER && (entry == null
|| entry.sequenceNumber != expectedSequenceNumber)) {
return null; // Snapshot is stale.
}
if (entry != null && entry.currentEditor != null) {
return null; // Another editor is already operating on this cache entry
}
if (mostRecentTrimFailed || mostRecentRebuildFailed) {
// The OS has become our enemy! If the trim job failed, it means we are storing more data than
// requested by the user. Do not allow edits so we do not go over that limit any further. If
// the journal rebuild failed, the journal writer will not be active, meaning we will not be
// able to record the edit, causing file leaks. In both cases, we want to retry the clean up
// so we can get out of this state!
executor.execute(cleanupRunnable);
return null;
}
// Append a DIRTY record to the journal
journalWriter.writeUtf8(DIRTY).writeByte(' ').writeUtf8(key).writeByte('\n');
journalWriter.flush();
if (hasJournalErrors) {
return null; // Don't edit; the journal can't be written.
}
if (entry == null) {
entry = new Entry(key);
lruEntries.put(key, entry);
}
Editor editor = new Editor(entry);
entry.currentEditor = editor;
return editor;
}
The edit method returns an Editor for the corresponding cache entry. Next, look at the entry.writeTo(editor); call made from Cache.put(); the relevant logic:
[okhttp3.Cache$Entry]
public void writeTo(DiskLruCache.Editor editor) throws IOException {
BufferedSink sink = Okio.buffer(editor.newSink(ENTRY_METADATA));
sink.writeUtf8(url)
.writeByte('\n');
sink.writeUtf8(requestMethod)
.writeByte('\n');
sink.writeDecimalLong(varyHeaders.size())
.writeByte('\n');
for (int i = 0, size = varyHeaders.size(); i < size; i++) {
sink.writeUtf8(varyHeaders.name(i))
.writeUtf8(": ")
.writeUtf8(varyHeaders.value(i))
.writeByte('\n');
}
sink.writeUtf8(new StatusLine(protocol, code, message).toString())
.writeByte('\n');
sink.writeDecimalLong(responseHeaders.size() + 2)
.writeByte('\n');
for (int i = 0, size = responseHeaders.size(); i < size; i++) {
sink.writeUtf8(responseHeaders.name(i))
.writeUtf8(": ")
.writeUtf8(responseHeaders.value(i))
.writeByte('\n');
}
sink.writeUtf8(SENT_MILLIS)
.writeUtf8(": ")
.writeDecimalLong(sentRequestMillis)
.writeByte('\n');
sink.writeUtf8(RECEIVED_MILLIS)
.writeUtf8(": ")
.writeDecimalLong(receivedResponseMillis)
.writeByte('\n');
if (isHttps()) {
sink.writeByte('\n');
sink.writeUtf8(handshake.cipherSuite().javaName())
.writeByte('\n');
writeCertList(sink, handshake.peerCertificates());
writeCertList(sink, handshake.localCertificates());
// The handshake’s TLS version is null on HttpsURLConnection and on older cached responses.
if (handshake.tlsVersion() != null) {
sink.writeUtf8(handshake.tlsVersion().javaName())
.writeByte('\n');
}
}
sink.close();
}
Its main job is to write the request/response metadata into the entry's dirty file at index ENTRY_METADATA (0).
Finally, look at the return new CacheRequestImpl(editor); at the end of Cache.put():
[okhttp3.Cache$CacheRequestImpl]
private final class CacheRequestImpl implements CacheRequest {
private final DiskLruCache.Editor editor;
private Sink cacheOut;
private Sink body;
boolean done;
public CacheRequestImpl(final DiskLruCache.Editor editor) {
this.editor = editor;
this.cacheOut = editor.newSink(ENTRY_BODY);
this.body = new ForwardingSink(cacheOut) {
@Override public void close() throws IOException {
synchronized (Cache.this) {
if (done) {
return;
}
done = true;
writeSuccessCount++;
}
super.close();
editor.commit();
}
};
}
@Override public void abort() {
synchronized (Cache.this) {
if (done) {
return;
}
done = true;
writeAbortCount++;
}
Util.closeQuietly(cacheOut);
try {
editor.abort();
} catch (IOException ignored) {
}
}
@Override public Sink body() {
return body;
}
}
Its close and abort methods call editor.commit and editor.abort to update the journal; editor.commit also promotes the dirtyFiles to cleanFiles so they become the stable, usable copy of the cache. The relevant logic is in DiskLruCache.completeEdit, which the editor calls:
[okhttp3.internal.cache.DiskLruCache.completeEdit]
synchronized void completeEdit(Editor editor, boolean success) throws IOException {
Entry entry = editor.entry;
if (entry.currentEditor != editor) {
throw new IllegalStateException();
}
// If this edit is creating the entry for the first time, every index must have a value.
if (success && !entry.readable) {
for (int i = 0; i < valueCount; i++) {
if (!editor.written[i]) {
editor.abort();
throw new IllegalStateException("Newly created entry didn't create value for index " + i);
}
if (!fileSystem.exists(entry.dirtyFiles[i])) {
editor.abort();
return;
}
}
}
for (int i = 0; i < valueCount; i++) {
File dirty = entry.dirtyFiles[i];
if (success) {
if (fileSystem.exists(dirty)) {
File clean = entry.cleanFiles[i];
fileSystem.rename(dirty, clean);// Promote the dirty file to a clean file
long oldLength = entry.lengths[i];
long newLength = fileSystem.size(clean);
entry.lengths[i] = newLength;
size = size - oldLength + newLength;
}
} else {
fileSystem.delete(dirty);// On failure, delete the dirty file
}
}
redundantOpCount++;
entry.currentEditor = null;
//Update the journal
if (entry.readable | success) {
entry.readable = true;
journalWriter.writeUtf8(CLEAN).writeByte(' ');
journalWriter.writeUtf8(entry.key);
entry.writeLengths(journalWriter);
journalWriter.writeByte('\n');
if (success) {
entry.sequenceNumber = nextSequenceNumber++;
}
} else {
lruEntries.remove(entry.key);
journalWriter.writeUtf8(REMOVE).writeByte(' ');
journalWriter.writeUtf8(entry.key);
journalWriter.writeByte('\n');
}
journalWriter.flush();
if (size > maxSize || journalRebuildRequired()) {
executor.execute(cleanupRunnable);
}
}
CacheRequestImpl implements the CacheRequest interface and is exposed to outside classes (mainly CacheInterceptor), which use it to write or update cached data.
3.8 Summary
To sum up, DiskLruCache has the following characteristics:
- LRU replacement is implemented on top of LinkedHashMap
- A local journal of cache operations guarantees the atomicity and recoverability of the cache, and the journal is periodically compacted to keep it from growing indefinitely
- Each cache entry has two state copies: DIRTY and CLEAN. CLEAN is the stable, usable state, and all snapshots handed out to callers are CLEAN; DIRTY is the state used while an entry is being created or updated. Because creation and update only touch the DIRTY copy, reads and writes are effectively separated
- Each cache entry therefore maps to four files: two states (DIRTY, CLEAN) with two files each, one holding the cache metadata and one holding the cached body
Last updated: 2017-05-05 10:31:20