A detailed analysis of the working principle of Android Handler mechanism

  • 2021-11-02 02:14:26
  • OfStack

Write at the front

After writing the Binder study notes last time, I went back to look at the startup process of Activity. Knowing the basic principles of Binder made the material stick much better than before. I originally planned to write about the Activity startup process directly, but I kept feeling that Handler deserved a post of its own: once the principles of both Handler and Binder are clear, reading the Activity startup process should pose no problem. There are already plenty of articles about Handler, and the upper-layer principle of the Handler mechanism is not difficult, but I still decided to write one, because I want to build my own knowledge system and to give readers of my blog a seamless reading experience.

The classes involved in the Handler mechanism mainly include Handler, Message, Looper, MessageQueue, ThreadLocal and so on. Handler and Message are the classes we are most familiar with, but Looper quietly does a few things for us before we can even start using Handler.

The source code in this article is based on android-28.

Looper

Before using Handler, we must first initialize a Looper for the current thread and set it running:


Looper.prepare();
...
Looper.loop();

After these two calls, the Looper is up and running. Let's look at the corresponding source code first:


public static void prepare() {
 prepare(true);
}

private static void prepare(boolean quitAllowed) {
 if (sThreadLocal.get() != null) {
 throw new RuntimeException("Only one Looper may be created per thread");
 }
 sThreadLocal.set(new Looper(quitAllowed));
}

private Looper(boolean quitAllowed) {
 mQueue = new MessageQueue(quitAllowed);
 mThread = Thread.currentThread();
}

A thread may have at most one Looper object, so initializing a Looper first checks whether the current thread already has one. The Looper constructor creates a MessageQueue, and the newly created Looper is then stored in a ThreadLocal. We will come back to ThreadLocal later.
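To make this concrete, here is a minimal sketch of the classic worker-thread pattern (essentially the example from the Looper class documentation; the class name and the empty handleMessage() body are placeholders):

import android.os.Handler;
import android.os.Looper;
import android.os.Message;

class LooperThread extends Thread {
    public Handler mHandler;

    @Override
    public void run() {
        // Creates a Looper (and its MessageQueue) and binds it to this thread.
        Looper.prepare();

        // A Handler created here is attached to this thread's Looper,
        // so messages sent through it are handled on this thread.
        mHandler = new Handler(Looper.myLooper()) {
            @Override
            public void handleMessage(Message msg) {
                // process incoming messages here
            }
        };

        // Starts the message loop; this call blocks until the Looper quits.
        Looper.loop();
    }
}

With prepare() and loop() in place, let's see what loop() actually does: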


public static void loop() {
 //  Check whether a Looper has been initialized for the current thread
 final Looper me = myLooper();
 if (me == null) {
 throw new RuntimeException("No Looper; Looper.prepare() wasn't called on this thread.");
 }
 
 final MessageQueue queue = me.mQueue;

 ...

 for (;;) {
 Message msg = queue.next(); // might block
 if (msg == null) {
 // No message indicates that the message queue is quitting.
 return;
 }
 
 ...

 final long traceTag = me.mTraceTag;

 if (traceTag != 0 && Trace.isTagEnabled(traceTag)) {
 Trace.traceBegin(traceTag, msg.target.getTraceName(msg));
 }

 try {
 // target is the Handler that sent this message
 msg.target.dispatchMessage(msg);
 } finally {
 if (traceTag != 0) {
 Trace.traceEnd(traceTag);
 }
 }
 
 ...

 msg.recycleUnchecked();
 }
}

The method is fairly long, so only the core code is shown. One of the omitted parts is interesting: we can specify a threshold, say 200 ms, and whenever dispatching a Message takes longer than that, a log line is printed. This can help us spot potential performance problems during development. Unfortunately, the method for setting the threshold is hidden and cannot be called directly, so that code is not shown here; interested readers can dig through the source themselves.

The simplified code shows that the logic is very simple. Looper is essentially a tireless laborer: it takes a Message out of the MessageQueue, hands it to the Handler for dispatch, then takes the next Message out of the MessageQueue... on and on, endlessly.

At this point something should feel off: this is an infinite loop, so wouldn't it hog the CPU even when there is nothing to process? Does the message queue simply return null once all messages have been handled, ending the Looper? Obviously not; note the comment might block.

MessageQueue

The answer lies in MessageQueue. Let's look directly at next():


Message next() {
 ...
 int pendingIdleHandlerCount = -1; // -1 only during first iteration
 int nextPollTimeoutMillis = 0;
 for (;;) {
 if (nextPollTimeoutMillis != 0) {
 Binder.flushPendingCommands();
 }

 nativePollOnce(ptr, nextPollTimeoutMillis);

 synchronized (this) {
 // Try to retrieve the next message. Return if found.
 final long now = SystemClock.uptimeMillis();
 Message prevMsg = null;
 Message msg = mMessages;
 if (msg != null && msg.target == null) {
 // Stalled by a barrier. Find the next asynchronous message in the queue.
 do {
 prevMsg = msg;
 msg = msg.next;
 } while (msg != null && !msg.isAsynchronous());
 }
 if (msg != null) {
 if (now < msg.when) {
 // Next message is not ready. Set a timeout to wake up when it is ready.
 nextPollTimeoutMillis = (int) Math.min(msg.when - now, Integer.MAX_VALUE);
 } else {
 // Got a message.
 mBlocked = false;
 if (prevMsg != null) {
 prevMsg.next = msg.next;
 } else {
 mMessages = msg.next;
 }
 msg.next = null;
 if (DEBUG) Log.v(TAG, "Returning message: " + msg);
 msg.markInUse();
 return msg;
 }
 } else {
 // No more messages.
 nextPollTimeoutMillis = -1;
 }

 // Process the quit message now that all pending messages have been handled.
 if (mQuitting) {
 dispose();
 return null;
 }

 // If first time idle, then get the number of idlers to run.
 // Idle handles only run if the queue is empty or if the first message
 // in the queue (possibly a barrier) is due to be handled in the future.
 if (pendingIdleHandlerCount < 0
 && (mMessages == null || now < mMessages.when)) {
 pendingIdleHandlerCount = mIdleHandlers.size();
 }
 if (pendingIdleHandlerCount <= 0) {
 // No idle handlers to run. Loop and wait some more.
 mBlocked = true;
 continue;
 }

 if (mPendingIdleHandlers == null) {
 mPendingIdleHandlers = new IdleHandler[Math.max(pendingIdleHandlerCount, 4)];
 }
 mPendingIdleHandlers = mIdleHandlers.toArray(mPendingIdleHandlers);
 }

 // Run the idle handlers.
 // We only ever reach this code block during the first iteration.
 for (int i = 0; i < pendingIdleHandlerCount; i++) {
 final IdleHandler idler = mPendingIdleHandlers[i];
 mPendingIdleHandlers[i] = null; // release the reference to the handler

 boolean keep = false;
 try {
 keep = idler.queueIdle();
 } catch (Throwable t) {
 Log.wtf(TAG, "IdleHandler threw exception", t);
 }

 if (!keep) {
 synchronized (this) {
 mIdleHandlers.remove(idler);
 }
 }
 }

 // Reset the idle handler count to 0 so we do not run them again.
 pendingIdleHandlerCount = 0;

 // While calling an idle handler, a new message could have been delivered
 // so go back and look again for a pending message without waiting.
 nextPollTimeoutMillis = 0;
 }
}

The code is a bit long, and this time I don't plan to omit any of it, because there is a little easter egg hidden inside.

The most important part of the method is this line:


nativePollOnce(ptr, nextPollTimeoutMillis);

Simply put, when nextPollTimeoutMillis == -1 the current thread is suspended and the CPU is released; when nextPollTimeoutMillis >= 0 the thread blocks for at most that many milliseconds before being woken up so the code can continue. Underneath this are the native pipe and epoll mechanisms, which we will not go into here (mainly because I can't go much further). This also answers the question above: when there are no messages, the thread simply parks, which keeps Looper's infinite loop alive without consuming CPU resources.
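The real blocking happens in native code, but the idea can be mimicked in plain Java. The sketch below is purely illustrative, my own toy class rather than anything from the framework: it parks the thread until the timeout expires or wake() is called, which is the role nativePollOnce() and nativeWake() play for MessageQueue.

// Conceptual sketch only -- the real MessageQueue blocks in native code
// (nativePollOnce/epoll); this just mimics the "wait until due or woken" idea.
final class ToyQueue {
    private final Object lock = new Object();

    // Block for up to timeoutMillis; -1 means "wait until explicitly woken",
    // 0 means "return immediately and re-check the queue".
    void pollOnce(long timeoutMillis) throws InterruptedException {
        synchronized (lock) {
            if (timeoutMillis < 0) {
                lock.wait();              // park indefinitely, no CPU used
            } else if (timeoutMillis > 0) {
                lock.wait(timeoutMillis); // park until the next message is due
            }
        }
    }

    // Called when a new head message arrives, like nativeWake() in enqueueMessage().
    void wake() {
        synchronized (lock) {
            lock.notify();
        }
    }
}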

Now let's look at how a message is taken out. MessageQueue holds a Message field (mMessages), and the Message class itself has a next field, so the queue is actually a singly linked list with messages ordered by time. Normally the Message we want is the first one (we will ignore asynchronous messages for now and come back to them later; yes, I have just dug a pit for myself). If the current time is greater than or equal to the time recorded in the Message, the message is unlinked and returned to Looper. Since nextPollTimeoutMillis is 0 at that point, Looper comes back for the next message as soon as the previous one has been processed.

If the current time is less than the time recorded in the Message, nextPollTimeoutMillis is set so that the thread wakes up exactly when the message becomes due. And if there is no message at all, nextPollTimeoutMillis is set to -1, meaning the thread will be suspended indefinitely. But don't rush; we are not there yet. Keep reading.

The next piece of logic deals with IdleHandler. If no IdleHandler is registered, we simply continue; if there are any, their queueIdle() methods are executed. What is an IdleHandler for? You can probably guess from the name, so I won't dwell on it here; for one of its nice uses, see the deferred loading section of my earlier article on Android startup optimization. After the queueIdle() methods have run, nextPollTimeoutMillis is reset to 0 and the queue is checked again for new messages.
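As a quick illustration, an IdleHandler is registered on the current thread's queue like this (doDeferredInit() is a placeholder for whatever non-urgent work you want to defer):

Looper.myQueue().addIdleHandler(new MessageQueue.IdleHandler() {
    @Override
    public boolean queueIdle() {
        // Runs only when the MessageQueue has nothing due right now --
        // a good spot for deferred, non-urgent initialization.
        doDeferredInit(); // placeholder for your own work
        // Returning false removes this IdleHandler after it runs once;
        // returning true keeps it registered for future idle moments.
        return false;
    }
});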

Handler

The message-retrieval side is now clear. Everything is ready except for one thing: adding messages to the queue, which is where the familiar Handler comes in. Handler adds messages to the queue in two main ways:


Handler.sendXXX();
Handler.postXXX();

The first family sends a Message, the second posts a Runnable. Either way, everything eventually goes through the enqueueMessage() method of MessageQueue.
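Before diving into enqueueMessage(), here is roughly what the two styles look like from the caller's side (MSG_REFRESH and the handler bodies are placeholders):

final int MSG_REFRESH = 1; // placeholder message code

Handler handler = new Handler(Looper.getMainLooper()) {
    @Override
    public void handleMessage(Message msg) {
        if (msg.what == MSG_REFRESH) {
            // handle the message on the main thread
        }
    }
};

// send* style: a Message object is queued directly.
handler.sendEmptyMessage(MSG_REFRESH);
handler.sendMessageDelayed(Message.obtain(handler, MSG_REFRESH), 1000);

// post* style: the Runnable is wrapped in a Message (stored in msg.callback).
handler.post(new Runnable() {
    @Override
    public void run() {
        // runs on the thread whose Looper this Handler is attached to
    }
});

With that picture in mind, here is enqueueMessage():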


boolean enqueueMessage(Message msg, long when) {
 ...

 synchronized (this) {
 ...

 msg.markInUse();
 msg.when = when;
 Message p = mMessages;
 boolean needWake;
 if (p == null || when == 0 || when < p.when) {
 // New head, wake up the event queue if blocked.
 msg.next = p;
 mMessages = msg;
 needWake = mBlocked;
 } else {
 // Inserted within the middle of the queue. Usually we don't have to wake
 // up the event queue unless there is a barrier at the head of the queue
 // and the message is the earliest asynchronous message in the queue.
 needWake = mBlocked && p.target == null && msg.isAsynchronous();
 Message prev;
 for (;;) {
 prev = p;
 p = p.next;
 if (p == null || when < p.when) {
 break;
 }
 if (needWake && p.isAsynchronous()) {
 needWake = false;
 }
 }
 msg.next = p; // invariant: p == prev.next
 prev.next = msg;
 }

 // We can assume mPtr != 0 because mQuitting is false.
 if (needWake) {
 nativeWake(mPtr);
 }
 }
 return true;
}

In general, when we send a message through Handler, a boot-relative timestamp is obtained via SystemClock.uptimeMillis(), and MessageQueue keeps Messages sorted by this time. So there are two cases in enqueueMessage(): either the message can be inserted directly at the head of the queue, or it belongs somewhere in the middle, which requires walking the list to find the right slot. The when == 0 case corresponds to Handler's sendMessageAtFrontOfQueue() and postAtFrontOfQueue() methods. The needWake flag decides whether the Looper thread needs to be woken up.
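Two hedged examples of how those cases arise in practice (handler and MSG_REFRESH are the placeholders from the earlier snippet):

// Enqueued with when = SystemClock.uptimeMillis() + 500, so it is inserted
// after every message that is due earlier -- the "walk the list" branch.
handler.sendMessageDelayed(Message.obtain(handler, MSG_REFRESH), 500);

// Enqueued with when = 0, which enqueueMessage() treats as a new head --
// the sendMessageAtFrontOfQueue()/postAtFrontOfQueue() case described above.
handler.sendMessageAtFrontOfQueue(Message.obtain(handler, MSG_REFRESH));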

One point not yet covered: after Looper takes a Message out of the MessageQueue, it hands the message back to Handler for dispatch.


public void dispatchMessage(Message msg) {
 if (msg.callback != null) {
 handleCallback(msg);
 } else {
 if (mCallback != null) {
 if (mCallback.handleMessage(msg)) {
 return;
 }
 }
 handleMessage(msg);
 }
}

The priority order is: the callback carried by the Message itself (msg.callback) first, then the Callback passed to the Handler's constructor (mCallback), and finally the Handler's own handleMessage().
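A small sketch of the three paths in priority order (handler is the placeholder from the earlier snippet; everything here is illustrative):

// 1. msg.callback -- set when you use post(); dispatchMessage() runs it first.
handler.post(() -> Log.d("HandlerDemo", "msg.callback wins"));

// 2. mCallback -- the Handler.Callback passed to the constructor. If it
//    returns true, the message is consumed and handleMessage() is skipped.
Handler withCallback = new Handler(Looper.getMainLooper(), new Handler.Callback() {
    @Override
    public boolean handleMessage(Message msg) {
        return true; // consumed here
    }
});

// 3. handleMessage() -- the overridden method of a Handler subclass,
//    reached only when neither callback above handled the message.
Handler subclassed = new Handler(Looper.getMainLooper()) {
    @Override
    public void handleMessage(Message msg) {
        // fallback handling
    }
};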

ThreadLocal

Remember the ThreadLocal inside Looper? I saved it for last because it can be discussed on its own, and I did not want it to interrupt the main flow.

ThreadLocal is a data storage class. The remarkable thing about it is that the very same ThreadLocal object can hold different values in different threads: for example, "Hello" stored in thread A and "World" stored in thread B, without the two interfering with each other.
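The "Hello"/"World" claim is easy to verify with a few lines of plain Java (nothing Android-specific; class and thread names are arbitrary):

public class ThreadLocalDemo {
    private static final ThreadLocal<String> sLocal = new ThreadLocal<>();

    public static void main(String[] args) {
        Thread a = new Thread(() -> {
            sLocal.set("Hello");
            System.out.println("thread A sees: " + sLocal.get()); // Hello
        }, "A");

        Thread b = new Thread(() -> {
            sLocal.set("World");
            System.out.println("thread B sees: " + sLocal.get()); // World
        }, "B");

        a.start();
        b.start();
        // Each thread reads back only the value it stored, because the value
        // lives in the thread's own ThreadLocalMap, not in the ThreadLocal itself.
    }
}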

In the Handler mechanism, each thread has exactly one Looper, so storing the Looper in a ThreadLocal is a natural fit.

The most commonly used methods of ThreadLocal are set() and get(). Let's look at how each is implemented.


public void set(T value) {
 Thread t = Thread.currentThread();
 ThreadLocalMap map = getMap(t);
 if (map != null)
 map.set(this, value);
 else
 createMap(t, value);
}

set() first obtains the ThreadLocalMap for the current thread. If one exists, the value is stored directly; if not, one is created.


ThreadLocalMap getMap(Thread t) {
 return t.threadLocals;
}

Seeing this, things should start to make sense: every Thread holds its own ThreadLocalMap object. Calling ThreadLocal.set() actually fetches the ThreadLocalMap of the current thread, and naturally a different thread yields a different map.
Now let's look at where this ThreadLocalMap comes from. The class comment contains this sentence:

ThreadLocalMap is a customized hash map suitable only for maintaining thread local values.

As the comment says, this is a customized hash map whose Entry class restricts the key type to ThreadLocal. For our purposes, just think of it as a HashMap keyed by ThreadLocal.

The get() method is easy to understand: it simply looks up the value in the current thread's map (this is the standard JDK implementation). A quick glance is enough.


public T get() {
    Thread t = Thread.currentThread();
    ThreadLocalMap map = getMap(t);
    if (map != null) {
        ThreadLocalMap.Entry e = map.getEntry(this);
        if (e != null) {
            @SuppressWarnings("unchecked")
            T result = (T) e.value;
            return result;
        }
    }
    return setInitialValue();
}

Write at the end

Before I started writing, I thought the Handler mechanism was simple enough that writing about it was hardly necessary. But once I actually sat down to write, I had to dig into the details of the code, and I discovered that some parts were not as well understood as I had assumed. Being able to understand something and being able to explain it so that others understand are really two different levels.

