Detailed Explanation of C# Multithreaded Programming

  • 2021-11-13 17:56:03
  • OfStack

C# provides rich multithreading support, which brings great convenience to programming.

1. Reasons for using threads

1. Threads can be used to isolate code from other code, improving the reliability of applications.

2. You can use threads to simplify coding.

3. You can use threads for concurrent execution.

2. Basic knowledge

1. Process and thread: the process is the basic unit by which the operating system executes a program, and it owns the application's resources. A process contains threads; the process's resources are shared by its threads, and threads themselves own no resources.

2. Foreground and background threads: threads newly created through the Thread class default to foreground threads. When all foreground threads have exited, all background threads are terminated immediately without throwing an exception.

3. Suspend and Resume: because thread scheduling and program execution order are unpredictable, deadlock occurs easily when suspending and resuming threads, so these methods should be used as little as possible in practice.

4. Blocking a thread: Join blocks the calling thread until the joined thread terminates.

5. Terminating a thread: Abort terminates a thread by throwing a ThreadAbortException; an aborted thread cannot be resumed. Interrupt terminates a blocked thread by throwing a ThreadInterruptedException; the thread can continue executing by catching the exception.

6. Thread priorities: Highest, AboveNormal, Normal, BelowNormal, Lowest; the default is Normal.
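The points on blocking (Join) and interruption (Interrupt) above can be sketched as follows; this is a minimal illustration, not production code:

```csharp
using System;
using System.Threading;

class JoinInterruptDemo
{
    static void Main()
    {
        // Join: the calling thread blocks until the worker terminates.
        Thread worker = new Thread(() => Thread.Sleep(200));
        worker.Start();
        worker.Join();
        Console.WriteLine("Worker finished.");

        // Interrupt: a blocked thread is woken by a ThreadInterruptedException,
        // which it can catch in order to continue (or finish) gracefully.
        Thread sleeper = new Thread(() =>
        {
            try { Thread.Sleep(Timeout.Infinite); }
            catch (ThreadInterruptedException) { Console.WriteLine("Interrupted."); }
        });
        sleeper.Start();
        sleeper.Interrupt();
        sleeper.Join();
    }
}
```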

3. Use of threads

Thread entry points are passed as delegates. They may take no argument or a single object argument (only one argument is allowed), and they can be defined in a class or a struct.


using System;
using System.Threading;

namespace Test
{
  class Program
  {
    static void Main(string[] args)
    {
      Thread t1 = new Thread(new ThreadStart(TestMethod));
      Thread t2 = new Thread(new ParameterizedThreadStart(TestMethod));
      t1.IsBackground = true;
      t2.IsBackground = true;
      t1.Start();
      t2.Start("hello");
      Console.ReadKey();
    }

    public static void TestMethod()
    {
      Console.WriteLine(" Thread function without parameters ");
    }

    public static void TestMethod(object data)
    {
      string datastr = data as string;
      Console.WriteLine(" Thread function with parameters, the parameters are: {0}", datastr);
    }
  } 
}

4. Thread pool

Because creating and destroying threads incurs overhead, using too many threads wastes memory and hurts performance; the thread pool was introduced to address this. The thread pool maintains a request queue: its code extracts tasks from the queue and dispatches each one to a pool thread for execution. A thread is not destroyed immediately after its task finishes, so tasks can run in the background while the overhead of thread creation and destruction is reduced.

Thread pool threads default to background threads (IsBackground).


using System;
using System.Threading;

namespace Test
{
  class Program
  {
    static void Main(string[] args)
    {
      // Queue a work item to the thread pool; a single state object can be passed to the callback
      ThreadPool.QueueUserWorkItem(TestMethod, "Hello");
      Console.ReadKey();
    }

    public static void TestMethod(object data)
    {
      string datastr = data as string;
      Console.WriteLine(datastr);
    }
  }
}

5. The Task class

Starting an asynchronous operation with ThreadPool.QueueUserWorkItem() is very simple, but the method's biggest problem is that there is no built-in mechanism to let you know when the operation has completed, nor a built-in way to obtain a return value afterwards. For this, you can use the Task class in System.Threading.Tasks.

Construct a Task<TResult> object, passing the operation's return type as the generic TResult parameter.


using System;
using System.Threading.Tasks;

namespace Test
{
  class Program
  {
    static void Main(string[] args)
    {
      Task<Int32> t = new Task<Int32>(n => Sum((Int32)n), 1000);
      t.Start();
      t.Wait();
      Console.WriteLine(t.Result);
      Console.ReadKey();
    }

    private static Int32 Sum(Int32 n)
    {
      Int32 sum = 0;
      for (; n > 0; --n)
        checked { sum += n; } // checked: throws OverflowException if the sum overflows
      return sum;
    }
  }
}

Starting a new task automatically when one completes

After one task completes, it can start another task (a continuation). The previous code is rewritten below so that no thread is blocked waiting for the result.


using System;
using System.Threading.Tasks;

namespace Test
{
  class Program
  {
    static void Main(string[] args)
    {
      Task<Int32> t = new Task<Int32>(n => Sum((Int32)n), 1000);
      t.Start();
      //t.Wait();
      Task cwt = t.ContinueWith(task => Console.WriteLine("The result is {0}",t.Result));
      Console.ReadKey();
    }

    private static Int32 Sum(Int32 n)
    {
      Int32 sum = 0;
      for (; n > 0; --n)
        checked { sum += n; } // checked: throws OverflowException if the sum overflows
      return sum;
    }
  }
}

6. Delegate asynchronous execution

Asynchronous invocation of delegates: BeginInvoke () and EndInvoke ()


using System;

namespace Test
{
  public delegate string MyDelegate(object data);
  class Program
  {
    static void Main(string[] args)
    {
      MyDelegate mydelegate = new MyDelegate(TestMethod);
      IAsyncResult result = mydelegate.BeginInvoke("Thread Param", TestCallback, "Callback Param");

      // EndInvoke blocks until the asynchronous call completes, then returns the result
      string resultstr = mydelegate.EndInvoke(result);
      Console.WriteLine(resultstr);
    }

    // Thread function 
    public static string TestMethod(object data)
    {
      string datastr = data as string;
      return datastr;
    }

    // Asynchronous callback function 
    public static void TestCallback(IAsyncResult data)
    {
      Console.WriteLine(data.AsyncState);
    }
  }
}

7. Thread synchronization

1) Atomic operations (Interlocked): its methods perform simple operations such as increment, decrement, exchange, and compare-and-exchange as single atomic actions.
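As a minimal sketch of Interlocked, the following has ten tasks increment a shared counter; because each increment is atomic, no updates are lost:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

class InterlockedDemo
{
    static int counter = 0;

    static void Main()
    {
        // Ten tasks each increment the shared counter 1000 times.
        // Interlocked.Increment makes each increment a single atomic
        // operation, so the final value is always 10000.
        Task[] tasks = new Task[10];
        for (int i = 0; i < tasks.Length; i++)
            tasks[i] = Task.Run(() =>
            {
                for (int j = 0; j < 1000; j++)
                    Interlocked.Increment(ref counter);
            });
        Task.WaitAll(tasks);
        Console.WriteLine(counter);
    }
}
```

With a plain `counter++` instead, increments from different tasks could interleave and some updates would be lost.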

2) The lock () statement: avoid locking on a public type or on an instance accessible outside your code, since the lock object would then be out of your code's control; define a private object to lock on instead.
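A minimal sketch of the private-lock-object pattern just described (the Account class and amounts are made up for illustration):

```csharp
using System;
using System.Threading;

class Account
{
    // A private object dedicated to locking, so no outside code can lock on it.
    private readonly object balanceLock = new object();
    private decimal balance = 100m;

    public void Withdraw(decimal amount)
    {
        lock (balanceLock)   // only one thread at a time enters this block
        {
            if (balance >= amount)
                balance -= amount;
        }
    }

    static void Main()
    {
        Account account = new Account();
        Thread t1 = new Thread(() => account.Withdraw(30m));
        Thread t2 = new Thread(() => account.Withdraw(30m));
        t1.Start(); t2.Start();
        t1.Join(); t2.Join();
    }
}
```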

3) Monitor realizes thread synchronization

An exclusive lock is acquired and released through Monitor.Enter() and Monitor.Exit(). While held, the resource is monopolized and other threads are not allowed to access it.

There is also a TryEnter method, which does not block when the lock cannot be acquired: you can set a timeout, and it returns false if the lock is not obtained in time.
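A small sketch of both calls; the try/finally around Exit mirrors what the lock statement compiles to:

```csharp
using System;
using System.Threading;

class MonitorDemo
{
    private static readonly object gate = new object();

    static void Main()
    {
        // Monitor.Enter / Exit is the expanded form of the lock statement;
        // try/finally guarantees the lock is released even on exception.
        Monitor.Enter(gate);
        try { Console.WriteLine("Exclusive section."); }
        finally { Monitor.Exit(gate); }

        // TryEnter with a timeout returns false instead of blocking forever.
        if (Monitor.TryEnter(gate, TimeSpan.FromMilliseconds(50)))
        {
            try { Console.WriteLine("Got the lock."); }
            finally { Monitor.Exit(gate); }
        }
        else
        {
            Console.WriteLine("Could not get the lock in time.");
        }
    }
}
```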

4) ReaderWriterLock

To improve resource utilization, the read lock is a shared lock, so multiple threads can read the resource concurrently, while the write lock is an exclusive lock that allows only one thread to operate at a time.
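The shared-read/exclusive-write pattern can be sketched with ReaderWriterLockSlim, the lighter successor to ReaderWriterLock (the shared value here is just an illustration):

```csharp
using System;
using System.Threading;

class ReadWriteDemo
{
    private static readonly ReaderWriterLockSlim rw = new ReaderWriterLockSlim();
    private static int sharedValue;

    static int Read()
    {
        rw.EnterReadLock();          // shared: many readers may hold this at once
        try { return sharedValue; }
        finally { rw.ExitReadLock(); }
    }

    static void Write(int value)
    {
        rw.EnterWriteLock();         // exclusive: blocks all readers and writers
        try { sharedValue = value; }
        finally { rw.ExitWriteLock(); }
    }

    static void Main()
    {
        Write(42);
        Console.WriteLine(Read());
    }
}
```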

5) Event classes implement synchronization

An event has two states: signaled and non-signaled. Calling WaitOne on a signaled event succeeds immediately, while on a non-signaled event it blocks; Set puts the event into the signaled state.

① AutoResetEvent: automatically resets to the non-signaled state after releasing a single waiting thread.

② ManualResetEvent: remains signaled, releasing all waiting threads, until Reset is called manually.
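A minimal signaling sketch using AutoResetEvent; the Sleep merely stands in for some preparation work:

```csharp
using System;
using System.Threading;

class EventDemo
{
    // Starts non-signaled: WaitOne blocks until Set is called.
    private static readonly AutoResetEvent ready = new AutoResetEvent(false);

    static void Main()
    {
        Thread worker = new Thread(() =>
        {
            Console.WriteLine("Worker: waiting for the signal...");
            ready.WaitOne();   // blocks; auto-resets after releasing one waiter
            Console.WriteLine("Worker: signaled, continuing.");
        });
        worker.Start();

        Thread.Sleep(100);     // simulate some preparation work
        ready.Set();           // signal exactly one waiting thread
        worker.Join();
    }
}
```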

6) Semaphore

A semaphore is an int counter maintained by a kernel object. When it is 0, waiting threads block; when it is greater than 0, a waiting thread is released and the count is decremented by 1.

A thread decrements the semaphore count by 1 through WaitOne and increments it by 1 through Release; usage is simple.
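As a sketch, the semaphore below (initial and maximum count of 2, chosen for illustration) allows at most two of four threads into the protected section at once:

```csharp
using System;
using System.Threading;

class SemaphoreDemo
{
    // At most 2 threads may be inside the protected section at once.
    private static readonly Semaphore pool = new Semaphore(2, 2);

    static void Main()
    {
        for (int i = 1; i <= 4; i++)
        {
            int id = i;
            new Thread(() =>
            {
                pool.WaitOne();                  // count - 1 (blocks at 0)
                Console.WriteLine("Thread {0} entered.", id);
                Thread.Sleep(200);               // simulate work
                Console.WriteLine("Thread {0} leaving.", id);
                pool.Release();                  // count + 1
            }).Start();
        }
    }
}
```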

7) Mutex

A mutex grants exclusive access to a resource, similar to a semaphore with a maximum count of 1; unlike a semaphore, a mutex is owned by the thread that acquired it and must be released by that same thread.
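A minimal sketch of mutex ownership; note that ReleaseMutex must be called on the same thread that acquired the mutex:

```csharp
using System;
using System.Threading;

class MutexDemo
{
    private static readonly Mutex mutex = new Mutex();

    static void Main()
    {
        for (int i = 0; i < 3; i++)
        {
            new Thread(() =>
            {
                mutex.WaitOne();           // acquire exclusive ownership
                try
                {
                    Console.WriteLine("Thread {0} owns the mutex.",
                        Thread.CurrentThread.ManagedThreadId);
                    Thread.Sleep(100);     // simulate work
                }
                finally
                {
                    mutex.ReleaseMutex();  // must be released by the owning thread
                }
            }).Start();
        }
    }
}
```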

8) Cross-process synchronization

System-level synchronization can be achieved by giving a synchronization object a name: different applications (processes) identify the same synchronization object by that name.
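A common cross-process use is a named mutex to enforce a single application instance. The name "Global\MyAppMutex" below is an assumed example, not a fixed convention:

```csharp
using System;
using System.Threading;

class SingleInstanceDemo
{
    static void Main()
    {
        // A named mutex is visible system-wide: every process that opens
        // this name gets the same kernel object.
        bool createdNew;
        using (Mutex m = new Mutex(false, @"Global\MyAppMutex", out createdNew))
        {
            if (!createdNew)
            {
                Console.WriteLine("Another instance is already running.");
                return;
            }
            Console.WriteLine("First instance; doing work...");
            Console.ReadKey();
        }
    }
}
```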
