# Insertion Sort and Random Number Generation in Go

• 2020-06-07 04:38:21
• OfStack

Preface

Sorting is something every programming language has to deal with. Here we share some sorting algorithms implemented in Go, along with how to generate random numbers. Without further ado, let's look at the details.

Classical sorting algorithms

Learning algorithms is important: it is a key measure of a programmer's skill. Algorithms should not be memorized by rote; instead, the ideas behind them need to be understood so they can be applied flexibly in real development.

The 7 classic sorting algorithms:

- Insertion sort
- Selection sort
- Bubble sort
- Shell sort
- Merge sort
- Heap sort
- Quick sort

Insertion sort

Consider one question: given an array of length n whose first n-1 elements are already in increasing order, how do you sort the whole array?

1. Iterate over positions 1 through n-1 of the array and find the position k where the n-th element belongs

2. Move the elements from position k through n-1 one place back, and put the n-th element at position k

3. The array of length n is then in increasing order

Specific implementation method:

```go
package main

import "fmt"

// insertionSort sorts arr in place in ascending order.
func insertionSort(arr []int) {
	for i := 1; i < len(arr); i++ {
		value := arr[i]
		// Shift larger elements one place right until value is in position.
		for j := i - 1; j >= 0; j-- {
			if value < arr[j] {
				arr[j+1], arr[j] = arr[j], value
			} else {
				break
			}
		}
	}
}

func main() {
	arr := []int{6, 5, 4, 3, 2, 1, 0}
	insertionSort(arr)

	fmt.Println("Sorted arr: ", arr)
}
```

Complexity:

Time complexity: O(n^2)

Space complexity: O(1) extra space

O expressions (Big O notation) are commonly used in computer science to represent the complexity of an algorithm, including:

Time complexity: Measures the running time of an algorithm

Space complexity: Measures the space taken up by an algorithm, such as memory or disk

In general, the O expression represents the worst-case complexity.

The same analysis applies elsewhere: consider looking for a number among n random numbers. In the best case it is the first number checked, which is O(1); in the worst case it is the last, which is O(n). From a probability perspective, if the number is equally likely to appear at every position, the average is about n/2 comparisons.
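To make the best and worst cases concrete, here is a minimal linear search sketch; `linearSearch` is an illustrative name, not a function from the original article:

```go
package main

import "fmt"

// linearSearch returns the index of target in arr, or -1 if absent.
// Best case: target is at index 0 (one comparison, O(1)).
// Worst case: target is last or absent (n comparisons, O(n)).
func linearSearch(arr []int, target int) int {
	for i, v := range arr {
		if v == target {
			return i
		}
	}
	return -1
}

func main() {
	arr := []int{7, 3, 9, 1, 5}
	fmt.Println(linearSearch(arr, 7)) // 0: first element, best case
	fmt.Println(linearSearch(arr, 5)) // 4: last element, worst case
	fmt.Println(linearSearch(arr, 4)) // -1: absent, also worst case
}
```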

The average running time is the most meaningful measure of all cases, since it is the expected running time. In practice, however, the average is difficult to derive analytically and is usually estimated by running experiments on sample data. The worst-case running time, on the other hand, is a guarantee that the running time will get no worse, which is often the most important requirement in an application. In general, unless otherwise specified, running time means worst-case running time; that is, time complexity means worst-case time complexity.

The time complexity of common algorithms ranges from small to large:

```
O(1) < O(log2 n) < O(n) < O(n log2 n) < O(n^2) < O(n^3) < O(2^n)
```

Here O is a general marker for complexity, much like a function name for computational complexity.

Both types of complexity are estimates, obtained by analyzing the logic of the code to derive a complexity formula.

For time complexity, loops that depend on a variable are what matter most:

for i := 0; i < n; i++ { ... } can be read as O(n)

x = n + 1; y = x + 1; z = x + y; each statement is O(1)

For space complexity, allocations that depend on a variable are what matter most.

For example, x := make([]int, n) can be read as O(n).

Meanwhile, var x, y, z int declares three variables, but the amount of allocation does not grow with the input, so it is O(1).

Big O notation is a mathematical notation used to describe the asymptotic behavior of a function. It can describe either asymptotic growth toward infinity or asymptotic decay toward an infinitesimal, depending on whether you are analyzing an algorithm or describing the error term of a mathematical estimate.

Now look back at our insertion sort:

When the array is in reverse order, the time complexity is O(n^2). When the array is almost sorted, the time complexity is O(n).
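The gap between these two cases can be demonstrated by counting comparisons. The sketch below instruments the insertion sort from this article; `countComparisons` is an illustrative helper name, not from the original:

```go
package main

import "fmt"

// countComparisons runs insertion sort on a copy of arr and returns
// the number of element comparisons performed.
func countComparisons(arr []int) int {
	a := append([]int(nil), arr...)
	count := 0
	for i := 1; i < len(a); i++ {
		value := a[i]
		for j := i - 1; j >= 0; j-- {
			count++
			if value < a[j] {
				a[j+1], a[j] = a[j], value
			} else {
				break
			}
		}
	}
	return count
}

func main() {
	reversed := []int{5, 4, 3, 2, 1}
	nearlySorted := []int{1, 2, 3, 5, 4}
	fmt.Println(countComparisons(reversed))     // 10 = n(n-1)/2, the O(n^2) worst case
	fmt.Println(countComparisons(nearlySorted)) // 5, close to n-1, near the O(n) best case
}
```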

In addition, the per-operation overhead of insertion sort is extremely small and can be treated as a constant close to 1.

In practice, the constant factor is also important. Some algorithms have low complexity but a high constant; combined with the characteristics of the data, they can sometimes perform worse than an algorithm with higher complexity but a lower constant.

In working through the insertion sort algorithm, one algorithmic idea is worth understanding:

- Break the problem down into subproblems
- Find the initial state of the problem
- From the initial state, work through the subproblems step by step to reach the final solution

In practical applications, choosing an algorithm flexibly involves several key considerations:

- Complexity: time complexity, space complexity, constant factors, etc.
- Implementation complexity: an algorithm that is hard to implement can become a problem if it is not easy to test and maintain
- Applicability: is there a more appropriate algorithm for this particular business scenario?

In general, analyze case by case and solve the problem as simply as possible while satisfying the business requirements.

Generating interval random numbers in Go

```go
package main

import "math/rand"

// RandInt64 generates a random number in the interval [min, max).
// Parameters:
//   min: the minimum value (inclusive)
//   max: the maximum value (exclusive)
// When the interval is invalid (min >= max, or either bound is 0),
// it simply returns max.
func RandInt64(min, max int64) int64 {
	if min >= max || min == 0 || max == 0 {
		return max
	}
	return rand.Int63n(max-min) + min
}
```
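The two pieces of this article fit together naturally: use RandInt64 to fill a slice with random values, then sort it with the insertion sort shown earlier. A minimal sketch:

```go
package main

import (
	"fmt"
	"math/rand"
)

// RandInt64 returns a random number in [min, max); same function as above.
func RandInt64(min, max int64) int64 {
	if min >= max || min == 0 || max == 0 {
		return max
	}
	return rand.Int63n(max-min) + min
}

// insertionSort is the in-place sort from earlier in the article.
func insertionSort(arr []int) {
	for i := 1; i < len(arr); i++ {
		value := arr[i]
		for j := i - 1; j >= 0; j-- {
			if value < arr[j] {
				arr[j+1], arr[j] = arr[j], value
			} else {
				break
			}
		}
	}
}

func main() {
	// Fill a slice with random values in [10, 100), then sort it.
	arr := make([]int, 8)
	for i := range arr {
		arr[i] = int(RandInt64(10, 100))
	}
	insertionSort(arr)
	fmt.Println("Sorted random arr:", arr)
}
```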

