Android Volley framework source code analysis

  • 2021-07-10 20:43:07
  • OfStack

In the past I used Xutils for network communication on Android, because it also handles images alongside networking. Later I found that Xutils is built on HttpClient, which Google removed in Android 6.0, so I started looking for a new network framework. I also tried okhttp, but its callbacks do not run on the UI thread, so a Handler is still needed to update the UI. In the end I settled on the Volley framework. Here I analyze the source code behind a simple Volley network request.

A simple procedure for requesting network data using Volley:


RequestQueue queue = Volley.newRequestQueue(this); // Instantiate a request queue; Google recommends wrapping it in a singleton so the app only ever holds one queue
 StringRequest request = new StringRequest(Request.Method.POST, url1, new Response.Listener<String>() { 
      @Override 
      public void onResponse(String response) { 
        Toast.makeText(MainActivity.this, "success"+response, Toast.LENGTH_SHORT).show(); 
      } 
    }, new Response.ErrorListener() { 
      @Override 
      public void onErrorResponse(VolleyError error) { 
        Toast.makeText(MainActivity.this, " Failed "+error.getMessage(), Toast.LENGTH_SHORT).show(); 
      } 
    }){ 
      @Override 
      protected Map<String, String> getParams() throws AuthFailureError { // Override this method to supply POST parameters; you can also subclass Request and implement it there
        Map<String,String> params = new HashMap<>(); 
        params.put("name", "1233555"); // request parameter
        return params; 
      } 
    }; 
    queue.add(request); 
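
As the first comment notes, Google's recommendation is to keep a single RequestQueue for the whole app. A minimal sketch of such a singleton holder (the class and field names here are my own, not part of Volley):

public class VolleySingleton {
    private static RequestQueue sQueue;

    // Lazily create one application-wide queue and reuse it for every request.
    public static synchronized RequestQueue getQueue(Context context) {
        if (sQueue == null) {
            // Use the application context so the queue never keeps an Activity alive.
            sQueue = Volley.newRequestQueue(context.getApplicationContext());
        }
        return sQueue;
    }
}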

The dispatcher machinery starts running when newRequestQueue is called, but at that point the queue contains no requests, so the dispatcher threads simply block. Only when add() is executed does a network request actually begin.
So let's look at the queue.add(request) method first.


public <T> Request<T> add(Request<T> request) { 
  // Tag the request as belonging to this queue and add it to the set of current requests. 
  request.setRequestQueue(this);  
  synchronized (mCurrentRequests) { 
    mCurrentRequests.add(request);  // Add to the set of requests currently handled by this queue
  } 
 
  // Process requests in the order they are added. 
  request.setSequence(getSequenceNumber()); 
  request.addMarker("add-to-queue"); // Set flag  
 
  // If the request is uncacheable, skip the cache queue and go straight to the network. 
  if (!request.shouldCache()) { // If the request does not need caching (it does by default), add it straight to the network queue and return; otherwise fall through to the cache-queue logic below
    mNetworkQueue.add(request); 
    return request; 
  } 
 
  // Insert request into stage if there's already a request with the same cache key in flight. 
  synchronized (mWaitingRequests) { 
    String cacheKey = request.getCacheKey(); 
    if (mWaitingRequests.containsKey(cacheKey)) { // An identical cacheable request (same cache key) is already in flight, so stage this one behind it
      // There is already a request in flight. Queue up. 
      Queue<Request<?>> stagedRequests = mWaitingRequests.get(cacheKey); 
      if (stagedRequests == null) { 
        stagedRequests = new LinkedList<Request<?>>(); 
      } 
      stagedRequests.add(request); 
      mWaitingRequests.put(cacheKey, stagedRequests); 
      if (VolleyLog.DEBUG) { 
        VolleyLog.v("Request for cacheKey=%s is in flight, putting on hold.", cacheKey); 
      } 
    } else { // No identical request in flight: record the cache key with a null placeholder and add the request to the cache queue
      // Insert 'null' queue for this cacheKey, indicating there is now a request in 
      // flight. 
      mWaitingRequests.put(cacheKey, null);  
      mCacheQueue.add(request); 
    } 
    return request; 
  } 
} 

The add method first puts the request into the mCurrentRequests set, which holds every request the queue is currently handling, and then checks whether the request should be cached. If caching is not needed, the request goes straight into mNetworkQueue to wait for processing; otherwise it eventually ends up in mCacheQueue, because RequestQueue always tries the cache first. When the cache dispatcher finds no usable entry it still forwards the request to mNetworkQueue; if it does find one, the response is served from the cache directly. Before the request is added to the cache queue, add() also checks whether an identical request (same cache key) is already in flight: if so, the new request is parked in the mWaitingRequests staging map; if not, a null placeholder is stored under its cache key so that later duplicates can be detected, and the request is added to the cache queue.
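
To see how the mWaitingRequests staging map pays off, it helps to look at what happens when a request finishes. Roughly, RequestQueue.finish() releases any staged requests with the same cache key into the cache queue, where they can be served from the now-primed cache (a condensed sketch, paraphrased rather than quoted verbatim):

void finish(Request<?> request) {
    // Remove from the set of requests currently being processed.
    synchronized (mCurrentRequests) {
        mCurrentRequests.remove(request);
    }

    if (request.shouldCache()) {
        synchronized (mWaitingRequests) {
            String cacheKey = request.getCacheKey();
            Queue<Request<?>> waitingRequests = mWaitingRequests.remove(cacheKey);
            if (waitingRequests != null) {
                // The finished request has primed the cache, so the duplicates that
                // were held back can now be answered from the cache queue.
                mCacheQueue.addAll(waitingRequests);
            }
        }
    }
}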

Next, let's look at how a RequestQueue is created.


public static RequestQueue newRequestQueue(Context context, HttpStack stack) { 
    File cacheDir = new File(context.getCacheDir(), DEFAULT_CACHE_DIR); // The directory used for the disk cache
 
    String userAgent = "volley/0";  // Default user agent
    try { 
      String packageName = context.getPackageName(); 
      PackageInfo info = context.getPackageManager().getPackageInfo(packageName, 0); 
      userAgent = packageName + "/" + info.versionCode; // User agent = package name + version code
    } catch (NameNotFoundException e) { 
    } 
 
    if (stack == null) { // If no HttpStack was passed in, pick a default one; you can supply your own implementation here, which shows how extensible the framework is
      if (Build.VERSION.SDK_INT >= 9) { // On SDK 9 (Android 2.3) and above, use HurlStack, which is backed by HttpURLConnection
        stack = new HurlStack(); 
      } else { // Below 2.3, use HttpClientStack
        // Prior to Gingerbread, HttpUrlConnection was unreliable. 
        // See: http://android-developers.blogspot.com/2011/09/androids-http-clients.html 
        stack = new HttpClientStack(AndroidHttpClient.newInstance(userAgent)); 
      } 
    } 
 
    Network network = new BasicNetwork(stack); // Create the Network object; its only job is to perform requests
 
    RequestQueue queue = new RequestQueue(new DiskBasedCache(cacheDir), network); // Instantiate the request queue with the cache and network
    queue.start(); 
 
    return queue; 
  } 

public RequestQueue(Cache cache, Network network, int threadPoolSize) { // Constructor; the default ExecutorDelivery, built on a main-looper Handler, is used for delivering callbacks
    this(cache, network, threadPoolSize, 
        new ExecutorDelivery(new Handler(Looper.getMainLooper()))); 
  } 

The creation of a RequestQueue is fairly simple. Whether HttpURLConnection or HttpClient is used depends on the SDK version: HttpURLConnection had serious bugs before 2.3, so HttpClient is used there instead, while on newer versions HttpURLConnection is preferred because it is lightweight, supports gzip compression and caching, is faster than HttpClient, and is still being optimized. Once the HttpStack is chosen, a Network is built from it, the RequestQueue is created, and start() is called.
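
Since the HttpStack is pluggable, you can pass your own implementation to the two-argument newRequestQueue overload shown above. Here is a minimal sketch that wraps HurlStack with a bit of logging (my own example, not part of Volley):

HttpStack loggingStack = new HurlStack() {
    @Override
    public HttpResponse performRequest(Request<?> request, Map<String, String> additionalHeaders)
            throws IOException, AuthFailureError {
        // Log every outgoing URL, then delegate to the normal HttpURLConnection-based stack.
        VolleyLog.d("sending request to %s", request.getUrl());
        return super.performRequest(request, additionalHeaders);
    }
};
RequestQueue queue = Volley.newRequestQueue(context, loggingStack);
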
Now let's look at the queue's start() method:


public void start() { 
    stop(); // Make sure any currently running dispatchers are stopped. 
    // Create the cache dispatcher and start it. 
    mCacheDispatcher = new CacheDispatcher(mCacheQueue, mNetworkQueue, mCache, mDelivery); // Create the cache dispatcher, a thread whose run() executes after start()
    mCacheDispatcher.start(); 
 
    // Create network dispatchers (and corresponding threads) up to the pool size. 
    for (int i = 0; i < mDispatchers.length; i++) { // By default there are 4 NetworkDispatchers, which consume requests from the network queue in parallel for efficiency
      NetworkDispatcher networkDispatcher = new NetworkDispatcher(mNetworkQueue, mNetwork, 
          mCache, mDelivery); 
      mDispatchers[i] = networkDispatcher; 
      networkDispatcher.start(); 
    } 
  } 

This method starts the cache dispatcher thread and then four network dispatcher threads. The cache dispatcher checks whether each request already has a cached response: if a valid, unexpired entry exists, it is reused directly and the request is never added to the network queue, so the network dispatchers stay blocked waiting for work. If there is no usable cache entry, the cache dispatcher puts the request into the network queue, where a network dispatcher picks it up and executes it.
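
The figure of four network dispatchers comes from the queue's default thread-pool size; the two-argument constructor looks roughly like this (a sketch, paraphrased):

private static final int DEFAULT_NETWORK_THREAD_POOL_SIZE = 4; // default number of NetworkDispatchers

public RequestQueue(Cache cache, Network network) {
    this(cache, network, DEFAULT_NETWORK_THREAD_POOL_SIZE);
}
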
Now let's take a closer look at the CacheDispatcher thread's source code.

The run() method is fairly long, so we'll split it into two parts. Part 1:


@Override 
 public void run() { 
   if (DEBUG) VolleyLog.v("start new dispatcher"); 
   Process.setThreadPriority(Process.THREAD_PRIORITY_BACKGROUND); // Set the thread priority (value 10)
 
   // Make a blocking call to initialize the cache. 
   mCache.initialize(); // Initialize the cache
 
   while (true) { 
     try { 
       // Get a request from the cache triage queue, blocking until 
       // at least one is available. 
       final Request<?> request = mCacheQueue.take();  // Take a request from the cache queue, blocking until one is available
       request.addMarker("cache-queue-take");  // Add a marker
 
       // If the request has been canceled, don't bother dispatching it. 
       if (request.isCanceled()) {   
         request.finish("cache-discard-canceled"); 
         continue; 
       } 
 
       // Attempt to retrieve this item from cache. 
       Cache.Entry entry = mCache.get(request.getCacheKey()); // Look the request up in the cache
       if (entry == null) { // No cache entry was found
         request.addMarker("cache-miss"); // Add a cache-miss marker
         // Cache miss; send off to the network dispatcher.
         mNetworkQueue.put(request); // Hand the request to the network queue
         continue; 
       } 
 
       // If it is completely expired, just send it to the network. 
       if (entry.isExpired()) { // If the cache entry has fully expired, send the request to the network queue as well
         request.addMarker("cache-hit-expired"); 
         request.setCacheEntry(entry); 
         mNetworkQueue.put(request); 
         continue; 
       } 

Part 2:


// We have a cache hit; parse its data for delivery back to the request. 
        request.addMarker("cache-hit"); // Execution here means that the cache has not expired   And you can use the   
        Response<?> response = request.parseNetworkResponse( // Parses the read cache content into Response Object   
            new NetworkResponse(entry.data, entry.responseHeaders)); 
        request.addMarker("cache-hit-parsed"); // Add a tag  
 
        if (!entry.refreshNeeded()) { // If the cache entry does not need refreshing, call mDelivery.postResponse directly, which ends up invoking the request's listener callback
          // Completely unexpired cache hit. Just deliver the response. 
          mDelivery.postResponse(request, response); 
        } else { // If a refresh is needed, deliver the cached response first and also put the request into mNetworkQueue
          // Soft-expired cache hit. We can deliver the cached response, 
          // but we need to also send the request to the network for 
          // refreshing. 
          request.addMarker("cache-hit-refresh-needed"); 
          request.setCacheEntry(entry); 
 
          // Mark the response as intermediate. 
          response.intermediate = true; 
 
          // Post the intermediate response back to the user and have 
          // the delivery then forward the request along to the network. 
          mDelivery.postResponse(request, response, new Runnable() {   
            @Override 
            public void run() { 
              try { 
                mNetworkQueue.put(request); 
              } catch (InterruptedException e) { 
                // Not much we can do about this. 
              } 
            } 
          }); 
        } 
 
      } catch (InterruptedException e) { 
        // We may have been interrupted because it was time to quit. 
        if (mQuit) { 
          return; 
        } 
        continue; 
      } 
    } 
  } 

The flow of the code above is straightforward. A request is taken from the cache queue and the cache is consulted: if there is no entry, or the entry has expired, the request is handed to the network queue for a network dispatcher to handle. If the entry is valid and needs no refresh, the cached response is parsed and delivered through mDelivery's callback. If the entry is valid but soft-expired (it needs a refresh), the cached response is delivered as an intermediate result and the request is also put into the network queue.
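
The two checks above, isExpired() and refreshNeeded(), are simple timestamp comparisons on Cache.Entry against a hard TTL and a soft TTL; roughly (a sketch, paraphrased):

/** True if this cache entry is fully expired (hard TTL has passed). */
public boolean isExpired() {
    return this.ttl < System.currentTimeMillis();
}

/** True if the entry should be refreshed from the server (soft TTL has passed). */
public boolean refreshNeeded() {
    return this.softTtl < System.currentTimeMillis();
}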

Next, let's look at the NetworkDispatcher thread's code, which is similar to that of CacheDispatcher:


@Override 
  public void run() { 
    Process.setThreadPriority(Process.THREAD_PRIORITY_BACKGROUND); // Set the thread priority (value 10)
    while (true) { 
      long startTimeMs = SystemClock.elapsedRealtime(); // Record when handling of this request starts
      Request<?> request; 
      try { 
        // Take a request from the queue. 
        request = mQueue.take(); // Take a request from the network queue, blocking until one is available
      } catch (InterruptedException e) { 
        // We may have been interrupted because it was time to quit. 
        if (mQuit) { 
          return; 
        } 
        continue; 
      } 
 
      try { 
        request.addMarker("network-queue-take"); 
 
        // If the request was cancelled already, do not perform the 
        // network request. 
        if (request.isCanceled()) {   
          request.finish("network-discard-cancelled"); 
          continue; 
        } 
 
        addTrafficStatsTag(request);  
 
        // Perform the network request. 
        NetworkResponse networkResponse = mNetwork.performRequest(request); // The call that actually performs the request and returns a response
        request.addMarker("network-http-complete"); 
 
        // If the server returned 304 AND we delivered a response already, 
        // we're done -- don't deliver a second identical response. 
        if (networkResponse.notModified && request.hasHadResponseDelivered()) {  
          request.finish("not-modified"); 
          continue; 
        } 
 
        // Parse the response here on the worker thread. 
        Response<?> response = request.parseNetworkResponse(networkResponse); // Parse the response
        request.addMarker("network-parse-complete"); 
 
        // Write to cache if applicable. 
        // TODO: Only update cache metadata instead of entire record for 304s. 
        if (request.shouldCache() && response.cacheEntry != null) { // If caching is enabled, store the response in the cache
          mCache.put(request.getCacheKey(), response.cacheEntry); 
          request.addMarker("network-cache-written"); 
        } 
 
        // Post the response back. 
        request.markDelivered(); 
        mDelivery.postResponse(request, response); // Deliver the response, which triggers the listener callbacks
      } catch (VolleyError volleyError) { 
        volleyError.setNetworkTimeMs(SystemClock.elapsedRealtime() - startTimeMs); 
        parseAndDeliverNetworkError(request, volleyError); // Deliver the error callback
      } catch (Exception e) { 
        VolleyLog.e(e, "Unhandled exception %s", e.toString()); 
        VolleyError volleyError = new VolleyError(e); 
        volleyError.setNetworkTimeMs(SystemClock.elapsedRealtime() - startTimeMs); 
        mDelivery.postError(request, volleyError);  // Deliver the error callback
      } 
    } 
  } 

The NetworkDispatcher thread takes a request from the network queue, checks whether it has been cancelled, and if not calls the Network's performRequest method to execute the HTTP request; this is where the data is actually fetched. After the request completes, the response is written to the cache if the request's shouldCache flag is set, and finally the listener callbacks are invoked, completing the request.
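
Note that neither dispatcher thread touches the UI itself. The mDelivery object is the ExecutorDelivery created with a main-looper Handler in the RequestQueue constructor shown earlier, so the listener callbacks end up on the main thread. A condensed sketch of how it posts results (paraphrased, not the verbatim source):

public ExecutorDelivery(final Handler handler) {
    // Wrap the main-thread Handler in an Executor; every delivery is posted through it.
    mResponsePoster = new Executor() {
        @Override
        public void execute(Runnable command) {
            handler.post(command);
        }
    };
}

@Override
public void postResponse(Request<?> request, Response<?> response, Runnable runnable) {
    request.markDelivered();
    request.addMarker("post-response");
    // The posted runnable eventually calls request.deliverResponse(...) or
    // request.deliverError(...), which is where onResponse/onErrorResponse run.
    mResponsePoster.execute(new ResponseDeliveryRunnable(request, response, runnable));
}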

Finally, let's take a look at how the Network implementation (BasicNetwork) submits the request in mNetwork.performRequest(request). The code is long but not difficult:


@Override 
  public NetworkResponse performRequest(Request<?> request) throws VolleyError { 
    long requestStart = SystemClock.elapsedRealtime(); // Record the start time
    while (true) { 
      HttpResponse httpResponse = null;  
      byte[] responseContents = null; 
      Map<String, String> responseHeaders = Collections.emptyMap(); // Response headers start out empty
      try { 
        // Gather headers. 
        Map<String, String> headers = new HashMap<String, String>(); // Request headers
        addCacheHeaders(headers, request.getCacheEntry()); // Add cache-validation headers built from any existing cache entry
        httpResponse = mHttpStack.performRequest(request, headers); // Delegate the actual request to the HttpStack
        StatusLine statusLine = httpResponse.getStatusLine(); 
        int statusCode = statusLine.getStatusCode(); 
 
        responseHeaders = convertHeaders(httpResponse.getAllHeaders()); // Read the response headers
        // Handle cache validation. 
        if (statusCode == HttpStatus.SC_NOT_MODIFIED) {  // 304: serve from the cache
 
          Entry entry = request.getCacheEntry(); // Check whether this request has a previously cached entry
          if (entry == null) { // No cached entry to fall back on, so just return the bare 304 response
            return new NetworkResponse(HttpStatus.SC_NOT_MODIFIED, null, 
                responseHeaders, true, 
                SystemClock.elapsedRealtime() - requestStart); 
          } 
 
          // A HTTP 304 response does not have all header fields. We 
          // have to use the header fields from the cache entry plus 
          // the new ones from the response. 
          // http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.3.5 
          entry.responseHeaders.putAll(responseHeaders); // Otherwise merge the new headers into the cached entry and return its data
          return new NetworkResponse(HttpStatus.SC_NOT_MODIFIED, entry.data, 
              entry.responseHeaders, true, 
              SystemClock.elapsedRealtime() - requestStart); 
        } 
 
        // Some responses such as 204s do not have content. We must check. 
        // Some responses such as 204s do not have content. We must check.
        if (httpResponse.getEntity() != null) { // Non-304 response with a body
          responseContents = entityToBytes(httpResponse.getEntity()); // Read the response content; it is returned below
        } else {
          // Add 0 byte response as a way of honestly representing a
          // no-content request.
          responseContents = new byte[0];
        }

        // if the request is slow, log it.
        long requestLifetime = SystemClock.elapsedRealtime() - requestStart;
        logSlowRequests(requestLifetime, request, responseContents, statusLine);

        if (statusCode < 200 || statusCode > 299) {
          throw new IOException();
        }
        return new NetworkResponse(statusCode, responseContents, responseHeaders, false,
            SystemClock.elapsedRealtime() - requestStart);
      } catch (SocketTimeoutException e) {
        attemptRetryOnException("socket", request, new TimeoutError());
      } catch (ConnectTimeoutException e) {
        attemptRetryOnException("connection", request, new TimeoutError());
      } catch (MalformedURLException e) {
        throw new RuntimeException("Bad URL " + request.getUrl(), e);
      } catch (IOException e) {
        int statusCode = 0;
        NetworkResponse networkResponse = null;
        if (httpResponse != null) {
          statusCode = httpResponse.getStatusLine().getStatusCode();
        } else {
          throw new NoConnectionError(e);
        }
        VolleyLog.e("Unexpected response code %d for %s", statusCode, request.getUrl());
        if (responseContents != null) {
          networkResponse = new NetworkResponse(statusCode, responseContents,
              responseHeaders, false, SystemClock.elapsedRealtime() - requestStart);
          if (statusCode == HttpStatus.SC_UNAUTHORIZED ||
              statusCode == HttpStatus.SC_FORBIDDEN) {
            attemptRetryOnException("auth",
                request, new AuthFailureError(networkResponse));
          } else {
            // TODO: Only throw ServerError for 5xx status codes.
            throw new ServerError(networkResponse);
          }
        } else {
          throw new NetworkError(networkResponse);
        }
      }
    }
  }
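
The 304 branch above only works because addCacheHeaders turned the previously cached entry into conditional-request headers before the request went out. Roughly (a sketch, paraphrased):

private void addCacheHeaders(Map<String, String> headers, Cache.Entry entry) {
    // No cache entry means there are no validation headers to send.
    if (entry == null) {
        return;
    }
    if (entry.etag != null) {
        headers.put("If-None-Match", entry.etag);
    }
    if (entry.serverDate > 0) {
        // Ask the server whether the resource changed since we cached it.
        headers.put("If-Modified-Since", DateUtils.formatDate(new Date(entry.serverDate)));
    }
}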

Next, let's look at the HttpStack's request code (HurlStack in this case).
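
In outline, HurlStack.performRequest opens an HttpURLConnection for the URL, copies all headers onto it with addRequestProperty, applies the HTTP method and body, and then wraps the result back into an Apache HttpResponse so that BasicNetwork stays independent of the underlying stack. A condensed sketch (paraphrased, not the verbatim source):

@Override
public HttpResponse performRequest(Request<?> request, Map<String, String> additionalHeaders)
        throws IOException, AuthFailureError {
    // Merge the request's own headers with the extra ones passed in by BasicNetwork
    // (e.g. the cache-validation headers added by addCacheHeaders).
    HashMap<String, String> map = new HashMap<String, String>();
    map.putAll(request.getHeaders());
    map.putAll(additionalHeaders);

    URL parsedUrl = new URL(request.getUrl());
    HttpURLConnection connection = openConnection(parsedUrl, request); // sets timeouts etc.
    for (String headerName : map.keySet()) {
        connection.addRequestProperty(headerName, map.get(headerName));
    }
    // Choose GET/POST/PUT/... and write the body if there is one (see the next sketch).
    setConnectionParametersForRequest(connection, request);

    // Wrap the HttpURLConnection result back into an Apache HttpResponse.
    ProtocolVersion protocolVersion = new ProtocolVersion("HTTP", 1, 1);
    StatusLine responseStatus = new BasicStatusLine(protocolVersion,
            connection.getResponseCode(), connection.getResponseMessage());
    BasicHttpResponse response = new BasicHttpResponse(responseStatus);
    response.setEntity(entityFromConnection(connection));
    for (Map.Entry<String, List<String>> header : connection.getHeaderFields().entrySet()) {
        if (header.getKey() != null) {
            response.addHeader(new BasicHeader(header.getKey(), header.getValue().get(0)));
        }
    }
    return response;
}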



This function mainly uses HttpURLConnection: the headers are added with connection.addRequestProperty, and to send parameters you have to obtain the connection's output stream and write the body to it. Assume POST mode here.
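
A condensed sketch of how the method and body are applied, i.e. the POST branch of setConnectionParametersForRequest and the addBodyIfExists helper it calls (paraphrased, not the verbatim source):

static void setConnectionParametersForRequest(HttpURLConnection connection, Request<?> request)
        throws IOException, AuthFailureError {
    switch (request.getMethod()) {
        case Request.Method.POST:
            connection.setRequestMethod("POST");
            addBodyIfExists(connection, request);
            break;
        // GET, PUT, DELETE, ... are handled similarly in the real class.
        default:
            connection.setRequestMethod("GET");
            break;
    }
}

private static void addBodyIfExists(HttpURLConnection connection, Request<?> request)
        throws IOException, AuthFailureError {
    byte[] body = request.getBody(); // built from getParams(), see below
    if (body != null) {
        connection.setDoOutput(true);
        connection.addRequestProperty("Content-Type", request.getBodyContentType());
        DataOutputStream out = new DataOutputStream(connection.getOutputStream());
        out.write(body);
        out.close();
    }
}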




The body written to the stream is built by getBody(), which encapsulates the parameters:


public byte[] getBody() throws AuthFailureError { 
    Map<String, String> params = getParams(); 
    if (params != null && params.size() > 0) { 
      return encodeParameters(params, getParamsEncoding()); 
    } 
    return null; 
  } 
getParams() is the Request method overridden above; its return value is a Map of parameters, which encodeParameters() then URL-encodes:

private byte[] encodeParameters(Map<String, String> params, String paramsEncoding) { 
    StringBuilder encodedParams = new StringBuilder(); 
    try { 
      for (Map.Entry<String, String> entry : params.entrySet()) { 
        encodedParams.append(URLEncoder.encode(entry.getKey(), paramsEncoding)); 
        encodedParams.append('='); 
        encodedParams.append(URLEncoder.encode(entry.getValue(), paramsEncoding)); 
        encodedParams.append('&'); 
      } 
      return encodedParams.toString().getBytes(paramsEncoding); 
    } catch (UnsupportedEncodingException uee) { 
      throw new RuntimeException("Encoding not supported: " + paramsEncoding, uee); 
    } 
  } 

This method simply concatenates the parameters into a URL-encoded key=value string joined by '&'; for example, a map of {"name": "tom", "age": "20"} becomes "name=tom&age=20&" (note the trailing '&'), and the resulting bytes are submitted as the request body.

Finally, here is a summary of the framework's main classes and members and what they do:

RequestQueue: the queue that processes requests; requests are placed into it, and calling start() begins processing.

mCache: the request cache; when a submitted request needs to be cached, its response is stored here.

mNetwork: used only for performing network requests; it exposes a single performRequest method and needs an HttpStack to actually submit the request.

mDelivery: delivers responses and errors back to the caller, invoking the listener callbacks after a request completes.

mDispatchers: the array of network dispatcher threads (four by default, for efficiency); they handle requests that have no usable cache entry or whose cache entry has expired or needs refreshing, and they block when the network queue is empty.

mCacheDispatcher: the cache dispatcher thread; it serves requests that are already cached and forwards uncached requests to the network queue to wait for processing.

That is all for this article; I hope it helps you learn the Android Volley framework.
