Volley Source Code Analysis

References:
Android Volley完全解析(四),带你从源码的角度理解Volley
Volley 源码解析
知乎-如何去阅读Android Volley框架源码?
Volley学习笔记之简单使用及部分源码详解
Volley源码学习笔记
Volley的原理解析
Android Volley 源码解析(一),网络请求的执行流程

Introduction to Volley

Volley is an open-source networking library from Google. Essentially it wraps Android's HttpURLConnection or HttpClient (running requests on several worker threads) and adds a caching layer, which makes it well suited to frequent, concurrent network requests but not to large file downloads. Volley's official GitHub repository is linked here.

Usage Example

mRequestQueue = Volley.newRequestQueue(this);

StringRequest stringRequest = new StringRequest("https://www.baidu.com", new Response.Listener<String>() {
    @Override
    public void onResponse(String response) {
        Log.d(TAG, "response = " + response);
        // main
        Log.d(TAG, "currentThread = " + Thread.currentThread().getName());
    }
}, new Response.ErrorListener() {
    @Override
    public void onErrorResponse(VolleyError error) {
        Log.e(TAG, error.getMessage(), error);
        Log.d(TAG, "currentThread = " + Thread.currentThread().getName());
    }
});
mRequestQueue.add(stringRequest);

A quick example: first obtain a RequestQueue object. You don't need to create one for every request; creating it once is enough, for example as an app-wide singleton or as a member field of an Activity. Then create a Request object and add it to the RequestQueue. That's all there is to it.
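
As a minimal sketch of the "create it once" advice, a hypothetical holder class could look like this (VolleyHolder is my own name, not part of Volley):

import android.content.Context;

import com.android.volley.RequestQueue;
import com.android.volley.toolbox.Volley;

// Hypothetical helper class that keeps one app-wide RequestQueue.
public class VolleyHolder {
    private static RequestQueue sQueue;

    public static synchronized RequestQueue get(Context context) {
        if (sQueue == null) {
            // Use the application context so the queue does not hold on to an Activity.
            sQueue = Volley.newRequestQueue(context.getApplicationContext());
        }
        return sQueue;
    }
}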

Overview

First, here is a diagram of Volley's workflow (image borrowed from another article rather than drawn from scratch).

Next, the more important classes in Volley are listed below.

Class | Type | Role
Volley | concrete class | The externally exposed entry point; mainly used to build a RequestQueue
Request | abstract class | Base class for all network requests; subclasses include StringRequest, JsonRequest and ImageRequest
RequestQueue | concrete class | Holds the request queues and drives the CacheDispatcher, the NetworkDispatchers and the ResponseDelivery
CacheDispatcher | concrete class | Thread that processes requests from the cache queue
NetworkDispatcher | concrete class | Thread that processes requests from the network queue
Cache | interface | Main implementation is DiskBasedCache, which caches responses on disk
HttpStack | interface | Implementations are HurlStack and HttpClientStack, which handle HTTP requests via HttpURLConnection and HttpClient respectively
Network | interface | Implementation is BasicNetwork; calls the HttpStack to execute a request and converts the result into a NetworkResponse that the ResponseDelivery can handle
Response | concrete class | Wraps a parsed result for delivery
ResponseDelivery | interface | Implementation is ExecutorDelivery; delivers request results, calling back on the main thread

Creating the RequestQueue

Executing a network request involves three steps: first create a RequestQueue, second create a Request, and third add the Request to the RequestQueue. Let's start with the first step.

mRequestQueue = Volley.newRequestQueue(this);

Stepping into it:

public static RequestQueue newRequestQueue(Context context) {
    return newRequestQueue(context, (BaseHttpStack) null);
}

public static RequestQueue newRequestQueue(Context context, BaseHttpStack stack) {
    BasicNetwork network;
    if (stack == null) {
        if (Build.VERSION.SDK_INT >= 9) {
            network = new BasicNetwork(new HurlStack());
        } else {
            // Prior to Gingerbread, HttpUrlConnection was unreliable.
            // See: http://android-developers.blogspot.com/2011/09/androids-http-clients.html
            // At some point in the future we'll move our minSdkVersion past Froyo and can
            // delete this fallback (along with all Apache HTTP code).
            String userAgent = "volley/0";
            try {
                String packageName = context.getPackageName();
                PackageInfo info = context.getPackageManager().getPackageInfo(packageName, 0);
                userAgent = packageName + "/" + info.versionCode;
            } catch (NameNotFoundException e) {
            }

            network = new BasicNetwork(
                    new HttpClientStack(AndroidHttpClient.newInstance(userAgent)));
        }
    } else {
        network = new BasicNetwork(stack);
    }

    return newRequestQueue(context, network);
}

private static RequestQueue newRequestQueue(Context context, Network network) {
    File cacheDir = new File(context.getCacheDir(), DEFAULT_CACHE_DIR);
    RequestQueue queue = new RequestQueue(new DiskBasedCache(cacheDir), network);
    queue.start();
    return queue;
}

Here an HttpStack object is created based on the SDK version and then wrapped in a BasicNetwork to build a Network object, which is what actually executes network requests.
As you can see, the phone's SDK version is checked when the HttpStack is created. If the SDK version is 9 or higher, a HurlStack based on HttpURLConnection is created; otherwise an HttpClientStack based on HttpClient is created. Before Android 2.3 (SDK 9), HttpURLConnection had some bugs, so HttpClient was the more suitable choice; from Android 2.3 onward, HttpURLConnection is the better option. Since practically every phone today runs 4.0 or above, you can simply treat this if/else as always taking the HurlStack branch. Moving on.
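
As a side note, the two-argument overload shown above also lets the caller supply a stack explicitly and skip the version check entirely. A minimal sketch (using the default HurlStack purely for illustration):

// Supply an explicit BaseHttpStack instead of letting Volley pick one.
RequestQueue queue = Volley.newRequestQueue(context, new HurlStack());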
Continuing with the private overload: a default cache directory is created, then a RequestQueue is constructed with the Network object we just built, and start() is called on it.
Let's look at the RequestQueue constructor first.

public RequestQueue(Cache cache, Network network) {
    this(cache, network, DEFAULT_NETWORK_THREAD_POOL_SIZE);
}

public RequestQueue(Cache cache, Network network, int threadPoolSize) {
    this(cache, network, threadPoolSize, new ExecutorDelivery(new Handler(Looper.getMainLooper())));
}

public RequestQueue(Cache cache, Network network, int threadPoolSize,
        ResponseDelivery delivery) {
    mCache = cache;
    mNetwork = network;
    mDispatchers = new NetworkDispatcher[threadPoolSize];
    mDelivery = delivery;
}

To make it easier to follow, I've collapsed the constructors into this:

public RequestQueue(Cache cache, Network network) {
    mCache = cache;
    mNetwork = network;
    // DEFAULT_NETWORK_THREAD_POOL_SIZE = 4
    mDispatchers = new NetworkDispatcher[DEFAULT_NETWORK_THREAD_POOL_SIZE];
    Handler mainThreadHandler = new Handler(Looper.getMainLooper());
    mDelivery = new ExecutorDelivery(mainThreadHandler);
}

A NetworkDispatcher array of length 4 is created here, along with an ExecutorDelivery object, mDelivery.
The mDelivery object deserves a closer look; here are its relevant methods.

public ExecutorDelivery(final Handler handler) {
    // Make an Executor that just wraps the handler.
    mResponsePoster = new Executor() {
        @Override
        public void execute(Runnable command) {
            handler.post(command);
        }
    };
}

@Override
public void postResponse(Request<?> request, Response<?> response) {
    postResponse(request, response, null);
}

@Override
public void postResponse(Request<?> request, Response<?> response, Runnable runnable) {
    request.markDelivered();
    request.addMarker("post-response");
    mResponsePoster.execute(new ResponseDeliveryRunnable(request, response, runnable));
}

@Override
public void postError(Request<?> request, VolleyError error) {
    request.addMarker("post-error");
    Response<?> response = Response.error(error);
    mResponsePoster.execute(new ResponseDeliveryRunnable(request, response, null));
}

When mDelivery.postResponse() or postError() is called, it goes through mResponsePoster.execute(), which in turn calls handler.post(command). As shown above, this handler is bound to the main thread, so the response is posted back to the main thread. In short, mDelivery's job is to hand results produced on worker threads over to the main thread for processing.
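
Because the four-argument RequestQueue constructor shown earlier accepts any ResponseDelivery, you could in principle deliver callbacks on a different thread. A hedged sketch, only using the constructors shown above (the HandlerThread setup is my own illustration, not something Volley does by default):

// Deliver callbacks on a background HandlerThread instead of the main thread.
HandlerThread deliveryThread = new HandlerThread("volley-delivery");
deliveryThread.start();
ResponseDelivery delivery = new ExecutorDelivery(new Handler(deliveryThread.getLooper()));

File cacheDir = new File(context.getCacheDir(), "volley");
Network network = new BasicNetwork(new HurlStack());
RequestQueue queue = new RequestQueue(new DiskBasedCache(cacheDir), network, 4, delivery);
queue.start();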

Back to the RequestQueue; let's look at the source of queue.start().

public void start() {
    // Make sure any currently running dispatchers are stopped.
    stop();
    // Create the cache dispatcher and start it.
    // CacheDispatcher extends Thread.
    mCacheDispatcher = new CacheDispatcher(mCacheQueue, mNetworkQueue, mCache, mDelivery);
    mCacheDispatcher.start();

    // Create network dispatchers (and corresponding threads) up to the pool size.
    // NetworkDispatcher also extends Thread; one is created and started per slot in the pool.
    for (int i = 0; i < mDispatchers.length; i++) {
        NetworkDispatcher networkDispatcher = new NetworkDispatcher(mNetworkQueue, mNetwork, mCache, mDelivery);
        mDispatchers[i] = networkDispatcher;
        networkDispatcher.start();
    }
}

First it makes sure all previously running dispatcher threads are stopped. It then creates a cache dispatcher thread and starts it, followed by four network dispatcher threads, each of which is started in turn.
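
Since the number of NetworkDispatcher threads is just the length of the mDispatchers array, the three-argument constructor shown earlier lets you tune it. A small sketch (pool size 2 is an arbitrary example):

// Build a queue whose start() will spin up only two NetworkDispatcher threads.
Network network = new BasicNetwork(new HurlStack());
File cacheDir = new File(context.getCacheDir(), "volley");
RequestQueue queue = new RequestQueue(new DiskBasedCache(cacheDir), network, 2);
queue.start();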
Of these threads, let's pick the CacheDispatcher and take a quick look at its run() method.

@Override
public void run() {
    if (DEBUG) VolleyLog.v("start new dispatcher");
    // Lower the thread's priority to background.
    Process.setThreadPriority(Process.THREAD_PRIORITY_BACKGROUND);

    // Make a blocking call to initialize the cache.
    mCache.initialize();

    while (true) {
        try {
            processRequest();
        } catch (InterruptedException e) {
            // We may have been interrupted because it was time to quit.
            if (mQuit) {
                return;
            }
        }
    }
}

There is an infinite loop inside, so this thread keeps running and never exits on its own.
A small spoiler: the code in processRequest() blocks on taking a request from the cache queue (when the queue is empty) until a request becomes available (i.e., a new request is added to the cache queue), at which point it proceeds to handle that request.
We'll analyze this in detail later; for now, the key idea is that once these threads are started, they continuously take requests from their queues and process them.
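
The blocking behavior comes from BlockingQueue.take(): the call simply parks until another thread puts an element into the queue, which is exactly what RequestQueue.add() does. A tiny standalone sketch of the same idea (not Volley code; Volley's queues are also BlockingQueues, as the CacheDispatcher constructor below shows):

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class TakeBlocksDemo {
    public static void main(String[] args) throws InterruptedException {
        final BlockingQueue<String> queue = new LinkedBlockingQueue<>();

        Thread consumer = new Thread(new Runnable() {
            @Override
            public void run() {
                try {
                    // Blocks here until the producer adds an element.
                    String item = queue.take();
                    System.out.println("took: " + item);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        });
        consumer.start();

        Thread.sleep(1000);   // the consumer is parked during this second
        queue.put("request"); // take() returns as soon as this happens
    }
}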

RequestQueue.add(Request)

Now let's look at the process of adding a request to the RequestQueue.

/**
 * Adds a Request to the dispatch queue.
 * @param request The request to service
 * @return The passed-in request
 */
public <T> Request<T> add(Request<T> request) {
    // Tag the request as belonging to this queue and add it to the set of current requests.
    request.setRequestQueue(this);
    synchronized (mCurrentRequests) {
        // mCurrentRequests is a HashSet, which guarantees its elements are unique.
        mCurrentRequests.add(request);
    }

    // Process requests in the order they are added: assign a sequence number.
    request.setSequence(getSequenceNumber());
    // Mark the request as having been added to the queue.
    request.addMarker("add-to-queue");

    // If the request is uncacheable, skip the cache queue and go straight to the network.
    if (!request.shouldCache()) {
        mNetworkQueue.add(request);
        return request;
    }
    mCacheQueue.add(request);
    return request;
}

This breaks down into the following steps:

  1. Associate the request with the queue that will process it, so that when the request finishes it can notify that queue.
  2. Add the request to the mCurrentRequests HashSet (a HashSet is used to avoid duplicates). mCurrentRequests holds every request currently being handled by the RequestQueue.
  3. If the request must not be cached, put it into the network queue (see the sketch after this list for how to control this).
  4. If the request may be cached, put it into the cache queue.
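
Whether a request takes the cache path is decided by Request.shouldCache(), and callers can turn caching off with setShouldCache(). A short sketch (the URL and listener names are placeholders):

StringRequest request = new StringRequest("https://example.com/api", listener, errorListener);
// Force this request to bypass the cache queue and go straight to the network queue.
request.setShouldCache(false);
mRequestQueue.add(request);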

So the question becomes: what happens to a request once it has been added to a queue? Where does it go, and when is it taken out? The answer lies in CacheDispatcher and NetworkDispatcher.

CacheDispatcher

Let's first look at the CacheDispatcher constructor.
public CacheDispatcher(
        BlockingQueue<Request<?>> cacheQueue, BlockingQueue<Request<?>> networkQueue,
        Cache cache, ResponseDelivery delivery) {
    mCacheQueue = cacheQueue;
    mNetworkQueue = networkQueue;
    mCache = cache;
    mDelivery = delivery;
    mWaitingRequestManager = new WaitingRequestManager(this);
}

The mCacheDispatcher in RequestQueue is created when start() is called. From the constructor you can see that it receives the RequestQueue's mCacheQueue, mNetworkQueue, mCache and mDelivery (the ExecutorDelivery we met earlier).
Now the run() method:

@Override
public void run() {
    if (DEBUG) VolleyLog.v("start new dispatcher");
    // Lower the thread's priority to background.
    Process.setThreadPriority(Process.THREAD_PRIORITY_BACKGROUND);

    // Make a blocking call to initialize the cache.
    mCache.initialize();

    while (true) {
        try {
            processRequest();
        } catch (InterruptedException e) {
            // We may have been interrupted because it was time to quit.
            if (mQuit) {
                return;
            }
        }
    }
}

private void processRequest() throws InterruptedException {
    // Get a request from the cache triage queue, blocking until
    // at least one is available.
    final Request<?> request = mCacheQueue.take();
    request.addMarker("cache-queue-take");

    // If the request has been canceled, don't bother dispatching it.
    if (request.isCanceled()) {
        request.finish("cache-discard-canceled");
        return;
    }

    // Attempt to retrieve this item from cache, keyed by the request's cache key.
    Cache.Entry entry = mCache.get(request.getCacheKey());
    // Cache miss; send it off to the network dispatcher,
    // unless an identical request is already in flight.
    if (entry == null) {
        request.addMarker("cache-miss");
        if (!mWaitingRequestManager.maybeAddToWaitingRequests(request)) {
            mNetworkQueue.put(request);
        }
        return;
    }

    // If it is completely expired, just send it to the network.
    if (entry.isExpired()) {
        // Mark the request as "cache hit, but expired".
        request.addMarker("cache-hit-expired");
        // Attach the stale cache entry to the request.
        request.setCacheEntry(entry);
        // An expired entry still requires a network request,
        // again unless an identical request is already waiting.
        if (!mWaitingRequestManager.maybeAddToWaitingRequests(request)) {
            mNetworkQueue.put(request);
        }
        return;
    }

    // We have a cache hit (and it is not expired); parse its data for delivery back to the request.
    request.addMarker("cache-hit");
    // Parse the cached bytes into a Response.
    Response<?> response = request.parseNetworkResponse(
            new NetworkResponse(entry.data, entry.responseHeaders));
    request.addMarker("cache-hit-parsed");

    // If the cache entry does not need refreshing, just deliver the response.
    if (!entry.refreshNeeded()) {
        // Completely unexpired cache hit. Just deliver the response.
        mDelivery.postResponse(request, response);
    } else {
        // Soft-expired cache hit. We can deliver the cached response,
        // but we need to also send the request to the network for
        // refreshing.
        request.addMarker("cache-hit-refresh-needed");
        request.setCacheEntry(entry);
        // Mark the response as intermediate.
        response.intermediate = true;

        if (!mWaitingRequestManager.maybeAddToWaitingRequests(request)) {
            // Post the intermediate response back to the user and have
            // the delivery then forward the request along to the network.
            mDelivery.postResponse(request, response, new Runnable() {
                @Override
                public void run() {
                    try {
                        mNetworkQueue.put(request);
                    } catch (InterruptedException e) {
                        // Restore the interrupted status.
                        Thread.currentThread().interrupt();
                    }
                }
            });
        } else {
            // The request has been added to the list of waiting requests; it will
            // receive the network response of the first identical request once it returns.
            mDelivery.postResponse(request, response);
        }
    }
}

Let's walk through processRequest() carefully. First it takes a request from the cache queue; if the queue is empty it blocks on mCacheQueue.take() and goes no further. This is where the requests added earlier via RequestQueue.add(Request) are taken out. If the request taken out has been cancelled in the meantime, it is finished right away. Otherwise a Cache.Entry is looked up by the request's cache key: if there is no cached entry, the request is put into the network queue; if there is an entry, the dispatcher checks whether it has expired, and if so the request still goes to the network queue. If the entry has not expired, it is parsed into a Response object. Finally the dispatcher checks whether the entry needs refreshing: if not, the result is delivered to the main thread via mDelivery (the ExecutorDelivery); if it does need refreshing, the cached response is delivered to the main thread and the request is also put into the network queue.
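
For reference, the two expiry checks above boil down to two timestamps stored on Cache.Entry. Roughly (a condensed view, not the complete class; see the actual Cache.Entry for the full set of fields such as etag and serverDate):

// Condensed view of Cache.Entry.
public static class Entry {
    public byte[] data;                         // the cached body
    public Map<String, String> responseHeaders; // the cached headers
    public long ttl;                            // hard expiry time, in millis
    public long softTtl;                        // soft expiry ("needs refresh") time, in millis

    // True when the entry is fully expired: go to the network.
    public boolean isExpired() {
        return this.ttl < System.currentTimeMillis();
    }

    // True when the entry can still be served but should be refreshed.
    public boolean refreshNeeded() {
        return this.softTtl < System.currentTimeMillis();
    }
}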

NetworkDispatcher

Next, let's look at the NetworkDispatcher.

public NetworkDispatcher(BlockingQueue<Request<?>> queue,
        Network network, Cache cache, ResponseDelivery delivery) {
    mQueue = queue;
    mNetwork = network;
    mCache = cache;
    mDelivery = delivery;
}

@Override
public void run() {
    // Lower the thread's priority to background.
    Process.setThreadPriority(Process.THREAD_PRIORITY_BACKGROUND);
    while (true) {
        try {
            processRequest();
        } catch (InterruptedException e) {
            // We may have been interrupted because it was time to quit.
            if (mQuit) {
                return;
            }
        }
    }
}

private void processRequest() throws InterruptedException {
    // Take a request from the queue.
    Request<?> request = mQueue.take();

    // Elapsed time since boot, used to measure how long the request takes.
    long startTimeMs = SystemClock.elapsedRealtime();
    try {
        // Mark the request as taken from the network queue.
        request.addMarker("network-queue-take");

        // If the request was cancelled already, do not perform the
        // network request.
        if (request.isCanceled()) {
            request.finish("network-discard-cancelled");
            request.notifyListenerResponseNotUsable();
            return;
        }

        addTrafficStatsTag(request);

        // Perform the network request and get the raw response.
        NetworkResponse networkResponse = mNetwork.performRequest(request);
        // Mark the HTTP request as complete.
        request.addMarker("network-http-complete");

        // If the server returned 304 AND we delivered a response already,
        // we're done -- don't deliver a second identical response.
        if (networkResponse.notModified && request.hasHadResponseDelivered()) {
            // Finish the request and notify listeners that no usable response was produced.
            request.finish("not-modified");
            request.notifyListenerResponseNotUsable();
            return;
        }

        // Parse the response here on the worker thread.
        Response<?> response = request.parseNetworkResponse(networkResponse);
        // Mark parsing as complete.
        request.addMarker("network-parse-complete");

        // Write to cache if applicable.
        // TODO: Only update cache metadata instead of entire record for 304s.
        // If the request is cacheable and the response carries a cache entry,
        // write it to the disk cache.
        if (request.shouldCache() && response.cacheEntry != null) {
            mCache.put(request.getCacheKey(), response.cacheEntry);
            // Mark the cache entry as written.
            request.addMarker("network-cache-written");
        }

        // Post the response back.
        // Mark the request as having had a response delivered for it.
        request.markDelivered();
        // Deliver the parsed response.
        mDelivery.postResponse(request, response);
        // Notify all listeners that a valid response was received.
        request.notifyListenerResponseReceived(response);
    } catch (VolleyError volleyError) {
        volleyError.setNetworkTimeMs(SystemClock.elapsedRealtime() - startTimeMs);
        parseAndDeliverNetworkError(request, volleyError);
        request.notifyListenerResponseNotUsable();
    } catch (Exception e) {
        VolleyLog.e(e, "Unhandled exception %s", e.toString());
        VolleyError volleyError = new VolleyError(e);
        volleyError.setNetworkTimeMs(SystemClock.elapsedRealtime() - startTimeMs);
        mDelivery.postError(request, volleyError);
        request.notifyListenerResponseNotUsable();
    }
}

This is quite similar to CacheDispatcher. Walking through the code: first a request is taken from the network queue; if the queue is empty this call blocks, and it returns as soon as a request is added. If the request has been cancelled by then, it finishes immediately. Otherwise the network request is executed, which may block again. Once the response arrives, if the server returned 304 (and a response was already delivered) the request is finished. If not, the request parses the network response; if the request is cacheable, the parsed response's cache entry is written to disk; and finally the response is delivered to the main thread. Now let's see what the actual network request looks like by examining mNetwork.performRequest(request).

@Override
public NetworkResponse performRequest(Request<?> request) throws VolleyError {
    long requestStart = SystemClock.elapsedRealtime();
    while (true) {
        HttpResponse httpResponse = null;
        byte[] responseContents = null;
        List<Header> responseHeaders = Collections.emptyList();
        try {
            // Gather headers.
            Map<String, String> additionalRequestHeaders =
                    getCacheHeaders(request.getCacheEntry());
            httpResponse = mBaseHttpStack.executeRequest(request, additionalRequestHeaders);
            int statusCode = httpResponse.getStatusCode();

            responseHeaders = httpResponse.getHeaders();
            // Handle cache validation.
            if (statusCode == HttpURLConnection.HTTP_NOT_MODIFIED) {
                Entry entry = request.getCacheEntry();
                if (entry == null) {
                    return new NetworkResponse(HttpURLConnection.HTTP_NOT_MODIFIED, null, true,
                            SystemClock.elapsedRealtime() - requestStart, responseHeaders);
                }
                // Combine cached and response headers so the response will be complete.
                List<Header> combinedHeaders = combineHeaders(responseHeaders, entry);
                return new NetworkResponse(HttpURLConnection.HTTP_NOT_MODIFIED, entry.data,
                        true, SystemClock.elapsedRealtime() - requestStart, combinedHeaders);
            }

            // Some responses such as 204s do not have content. We must check.
            InputStream inputStream = httpResponse.getContent();
            if (inputStream != null) {
                responseContents =
                        inputStreamToBytes(inputStream, httpResponse.getContentLength());
            } else {
                // Add 0 byte response as a way of honestly representing a
                // no-content request.
                responseContents = new byte[0];
            }

            // if the request is slow, log it.
            long requestLifetime = SystemClock.elapsedRealtime() - requestStart;
            logSlowRequests(requestLifetime, request, responseContents, statusCode);

            if (statusCode < 200 || statusCode > 299) {
                throw new IOException();
            }
            return new NetworkResponse(statusCode, responseContents, false,
                    SystemClock.elapsedRealtime() - requestStart, responseHeaders);
        } catch (SocketTimeoutException e) {
            attemptRetryOnException("socket", request, new TimeoutError());
        } catch (MalformedURLException e) {
            throw new RuntimeException("Bad URL " + request.getUrl(), e);
        } catch (IOException e) {
            int statusCode;
            if (httpResponse != null) {
                statusCode = httpResponse.getStatusCode();
            } else {
                throw new NoConnectionError(e);
            }
            VolleyLog.e("Unexpected response code %d for %s", statusCode, request.getUrl());
            NetworkResponse networkResponse;
            if (responseContents != null) {
                networkResponse = new NetworkResponse(statusCode, responseContents, false,
                        SystemClock.elapsedRealtime() - requestStart, responseHeaders);
                if (statusCode == HttpURLConnection.HTTP_UNAUTHORIZED ||
                        statusCode == HttpURLConnection.HTTP_FORBIDDEN) {
                    attemptRetryOnException("auth",
                            request, new AuthFailureError(networkResponse));
                } else if (statusCode >= 400 && statusCode <= 499) {
                    // Don't retry other client errors.
                    throw new ClientError(networkResponse);
                } else if (statusCode >= 500 && statusCode <= 599) {
                    if (request.shouldRetryServerErrors()) {
                        attemptRetryOnException("server",
                                request, new ServerError(networkResponse));
                    } else {
                        throw new ServerError(networkResponse);
                    }
                } else {
                    // 3xx? No reason to retry.
                    throw new ServerError(networkResponse);
                }
            } else {
                attemptRetryOnException("network", request, new NetworkError());
            }
        }
    }
}

It's a long method, so I'll just summarize: it uses the BaseHttpStack created at the beginning to execute the network request. If you want to see the concrete HttpURLConnection request, you have to step into mBaseHttpStack.executeRequest(request, additionalRequestHeaders); I won't go into that here.
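
To give a rough idea of what happens inside, here is a heavily simplified, hypothetical sketch of an HttpURLConnection-based executeRequest handling only GET requests. This is not the actual HurlStack source, which also handles other HTTP methods, request bodies, the SSLSocketFactory, and more.

// Hypothetical, simplified GET-only version of an HttpURLConnection-based stack.
public HttpResponse executeRequest(Request<?> request, Map<String, String> additionalHeaders)
        throws IOException, AuthFailureError {
    HttpURLConnection connection =
            (HttpURLConnection) new URL(request.getUrl()).openConnection();
    connection.setConnectTimeout(request.getTimeoutMs());
    connection.setReadTimeout(request.getTimeoutMs());
    connection.setRequestMethod("GET");

    // Headers from the request itself plus the cache-validation headers passed in.
    for (Map.Entry<String, String> header : request.getHeaders().entrySet()) {
        connection.addRequestProperty(header.getKey(), header.getValue());
    }
    for (Map.Entry<String, String> header : additionalHeaders.entrySet()) {
        connection.addRequestProperty(header.getKey(), header.getValue());
    }

    int statusCode = connection.getResponseCode();
    // Wrap the status, headers and body stream for BasicNetwork to consume.
    List<Header> headers = new ArrayList<>();
    for (Map.Entry<String, List<String>> entry : connection.getHeaderFields().entrySet()) {
        if (entry.getKey() != null) {
            headers.add(new Header(entry.getKey(), entry.getValue().get(0)));
        }
    }
    return new HttpResponse(statusCode, headers,
            connection.getContentLength(), connection.getInputStream());
}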

Finishing and Cancelling Requests

Let's first look at the Request.finish(String) method.

void finish(final String tag) {
    if (mRequestQueue != null) {
        // Delegates to the RequestQueue's finish() method, which we look at next.
        mRequestQueue.finish(this);
    }
    if (MarkerLog.ENABLED) {
        final long threadId = Thread.currentThread().getId();
        if (Looper.myLooper() != Looper.getMainLooper()) {
            // If we finish marking off of the main thread, we need to
            // actually do it on the main thread to ensure correct ordering.
            Handler mainThread = new Handler(Looper.getMainLooper());
            mainThread.post(new Runnable() {
                @Override
                public void run() {
                    mEventLog.add(tag, threadId);
                    mEventLog.finish(Request.this.toString());
                }
            });
            return;
        }

        mEventLog.add(tag, threadId);
        mEventLog.finish(this.toString());
    }
}

Now the RequestQueue.finish(Request) method:

<T> void finish(Request<T> request) {
    // Remove from the set of requests currently being processed.
    synchronized (mCurrentRequests) {
        mCurrentRequests.remove(request);
    }
    synchronized (mFinishedListeners) {
        for (RequestFinishedListener<T> listener : mFinishedListeners) {
            listener.onRequestFinished(request);
        }
    }
}

Here the request is first removed from mCurrentRequests, and then every RequestFinishedListener is called back. The mFinishedListeners are registered through an optional RequestQueue method; personally I rarely use it.

public <T> void addRequestFinishedListener(RequestFinishedListener<T> listener) {
    synchronized (mFinishedListeners) {
        mFinishedListeners.add(listener);
    }
}
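
If you do want this callback, registering a listener looks roughly like this (the logging is just an illustration):

mRequestQueue.addRequestFinishedListener(new RequestQueue.RequestFinishedListener<Object>() {
    @Override
    public void onRequestFinished(Request<Object> request) {
        // Called once per finished request, on whatever thread finish() runs on.
        Log.d(TAG, "finished: " + request.getUrl());
    }
});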

Next, the RequestQueue.cancelAll(Object) method:

public void cancelAll(final Object tag) {
    if (tag == null) {
        throw new IllegalArgumentException("Cannot cancelAll with a null tag");
    }
    cancelAll(new RequestFilter() {
        @Override
        public boolean apply(Request<?> request) {
            return request.getTag() == tag;
        }
    });
}

public void cancelAll(RequestFilter filter) {
    synchronized (mCurrentRequests) {
        for (Request<?> request : mCurrentRequests) {
            if (filter.apply(request)) {
                request.cancel();
            }
        }
    }
}

The filter.apply(request) call inside cancelAll(RequestFilter filter) simply evaluates the `return request.getTag() == tag;` expression defined in cancelAll(final Object tag).
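
A typical usage pattern is to tag requests and cancel them when the caller goes away, for example in an Activity's onStop(). A small sketch (TAG is an arbitrary object of your choosing):

// When issuing the request:
stringRequest.setTag(TAG);
mRequestQueue.add(stringRequest);

// When the Activity is no longer interested in the results:
@Override
protected void onStop() {
    super.onStop();
    if (mRequestQueue != null) {
        mRequestQueue.cancelAll(TAG);
    }
}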

Conclusion

That more or less wraps up the Volley source analysis; I won't chase every little detail here. The main point of reading source code is to absorb the ideas and design behind a framework. We advocate not reinventing the wheel, but that doesn't mean we shouldn't understand how the wheel is made. Sometimes the best way to learn is to go through the wheel-making process yourself (i.e., reinvent it) 😂
github repo: https://github.com/mundane799699/AndroidProjects/tree/master/VolleyDemo
