# Python: maximum subarray sum (divide and conquer + dynamic programming)

• 2021-07-10 20:00:20
• OfStack

Given an integer array nums, find the contiguous subarray (containing at least one element) with the largest sum, and return that sum.

Example:

Input: [-2, 1, -3, 4, -1, 2, 1, -5, 4]
Output: 6
Explanation: The contiguous subarray [4, -1, 2, 1] has the largest sum, which is 6.

If you have already implemented an O(n) solution, try the more subtle divide-and-conquer approach.

Thoughts:

First, analyze the problem: why does the maximum-sum contiguous subarray contain these particular elements rather than others? Because to extend the current contiguous subarray, the adjacent element must be added as a whole, and that element may either increase or decrease the current sum.

Idea 1:

Traversal (greedy), O(n):

Algorithm: traverse the array, using onesum to maintain the running sum of the current subarray. Whenever onesum drops below 0, reset it to 0 and start accumulating afresh. Update the global maximum maxsum at every step.

```python
class Solution:
    def maxSubArray(self, nums):
        """
        :type nums: List[int]
        :rtype: int
        """
        # onesum maintains the running sum of the current subarray
        onesum = 0
        maxsum = nums[0]
        for i in range(len(nums)):
            onesum += nums[i]
            maxsum = max(maxsum, onesum)
            # When onesum < 0, reset it to 0 and start accumulating a new subarray
            if onesum < 0:
                onesum = 0
        return maxsum
```

Why does this work? Think of it dynamically: starting from an empty subarray, we scan from the first element and add each nums[i] to onesum in order. The process is simple and the code is short, but why does it cover every candidate subarray?

When element i is added to onesum, there are two cases.

1) onesum has never dropped below 0 so far. Then every onesum we compute is the sum of a subarray starting at the current start element, with the tail moving along with i. It may look as though many subarrays are ignored, but all of them have already been accounted for:

a) Subarrays that start at the current start and end somewhere in the middle. These are exactly the sums computed while scanning, and maxsum records the best of them, so no information is lost.

b) Subarrays that end at the current tail but start somewhere in the middle. Their sum cannot exceed the sum of the full subarray from the current start to the current tail, because the omitted prefix has a non-negative sum by our premise.

c) Subarrays that both start and end in the middle. By the same argument, such a subarray cannot beat the subarray that starts at the current start and ends at the same place, which case (a) already covers.

2) onesum has just dropped below 0 for the first time. The best subarray continuing past this point should not include the whole current subarray, since adding it contributes a negative amount. But should we keep some suffix of it? Should every subarray ending at the current tail be discarded, wherever it starts? Yes: each such suffix has a sum no larger than the whole current subarray (by the argument in case 1b), and the whole current subarray sums to less than 0, so every suffix is also negative. We therefore discard them all, reset onesum to 0, and start a new subarray.
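To make the reset behaviour concrete, here is a short trace of onesum and maxsum on the example input (a standalone sketch of the scan above; the function name and trace format are my own):

```python
def max_subarray_trace(nums):
    """Greedy scan, recording (element, onesum, maxsum) at each step."""
    onesum = 0
    maxsum = nums[0]
    trace = []
    for x in nums:
        onesum += x
        maxsum = max(maxsum, onesum)
        trace.append((x, onesum, maxsum))
        if onesum < 0:
            onesum = 0  # discard the current subarray and start fresh
    return maxsum, trace

best, steps = max_subarray_trace([-2, 1, -3, 4, -1, 2, 1, -5, 4])
# After -2 and again after -3, onesum goes negative and is reset;
# the subarray [4, -1, 2, 1] then drives maxsum up to 6.
```

Watching the trace shows the resets happen exactly where case 2 of the proof says a new subarray should begin.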

Idea 2:

Dynamic programming, O(n)

Algorithm process:

Let sum[i] be the largest sum of a contiguous subarray ending at element i. Assuming sum[i-1] is already known, the largest subarray ending at element i either extends the best subarray ending at i-1 or starts fresh at element i, i.e. sum[i] = max(sum[i-1] + a[i], a[i]). Choosing between the two amounts to checking whether sum[i-1] + a[i] > a[i], which is equivalent to checking whether sum[i-1] > 0. Because each step needs only the previous result, there is no need to keep the whole table as in ordinary dynamic programming, so both the time and the extra space used by the algorithm are small; the answer is the maximum of sum[i] over all i.

```python
class Solution:

    def maxSubArray(self, nums):
        """
        :type nums: List[int]
        :rtype: int
        """
        length = len(nums)
        for i in range(1, length):
            # Compare extending the best subarray ending at i-1 with starting fresh at i;
            # if nums[i] alone is larger, the previous running sum is discarded
            subMaxSum = max(nums[i] + nums[i - 1], nums[i])
            # Overwrite nums[i] with the best sum ending at i, so nums doubles as the dp table
            nums[i] = subMaxSum
        return max(nums)
```

Algorithm proof: the code reuses the input array nums itself as the dp table, so only a single pass is needed. After the update, nums[i] holds the largest sum of a subarray ending exactly at element i (note that it must include element i). That value is the best sum ending at i-1 plus nums[i] when that previous sum is positive, and nums[i] alone otherwise; the max in the code expresses exactly this choice without an explicit if.
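Since each step needs only the previous dp value, the same recurrence can also be written without mutating the input, keeping just one running variable (a sketch with my own variable names):

```python
def max_subarray_dp(nums):
    """DP with O(1) extra space: best_ending is the largest sum of a subarray ending here."""
    best_ending = nums[0]    # sum[0]
    best_overall = nums[0]
    for x in nums[1:]:
        # sum[i] = max(sum[i-1] + a[i], a[i])
        best_ending = max(best_ending + x, x)
        best_overall = max(best_overall, best_ending)
    return best_overall
```

This variant leaves nums intact, which matters if the caller still needs the original array.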

Idea 3:

Divide and conquer (recursion), O(n log n)

Algorithm process:

Divide and conquer: the maximum subarray sum lies either entirely in the left half, entirely in the right half, or spans both halves (i.e. it includes the last element of the left half and the first element of the right half). Return the maximum of these three cases. For the spanning case, scan leftwards from the midpoint, updating the best sum that ends at the left half's last element; the best sum starting at the right half's first element is found similarly. Total time complexity: O(n log n).

```python
class Solution(object):
    def maxSubArray(self, nums):
        # Main function: set up the left and right boundaries
        left = 0
        right = len(nums) - 1
        # Find the maximum subarray sum recursively
        maxSum = self.divide(nums, left, right)
        return maxSum

    def divide(self, nums, left, right):
        # A single element is returned as-is
        if left == right:
            return nums[left]
        # Split at the midpoint
        center = (left + right) // 2
        # Maximum subarray sum entirely within the left half
        leftMaxSum = self.divide(nums, left, center)
        # Maximum subarray sum entirely within the right half
        rightMaxSum = self.divide(nums, center + 1, right)

        # Spanning case, split into a left-boundary sum and a right-boundary sum
        leftBorderSum = nums[center]
        leftSum = nums[center]
        for i in range(center - 1, left - 1, -1):
            leftSum += nums[i]
            if leftSum > leftBorderSum:
                # Keep the best sum ending at the left half's last element
                leftBorderSum = leftSum

        rightBorderSum = nums[center + 1]
        rightSum = nums[center + 1]
        for i in range(center + 2, right + 1):
            rightSum += nums[i]
            if rightSum > rightBorderSum:
                # Keep the best sum starting at the right half's first element
                rightBorderSum = rightSum
        # Sum across the boundary = best left part + best right part
        BorderSum = leftBorderSum + rightBorderSum
        return max(leftMaxSum, rightMaxSum, BorderSum)
```

Algorithm proof:

The idea is elegant: keep splitting the array, viewing each piece as three parts: left, right, and crossing the middle. In a sense this is still enumeration, but the binary splitting prunes it heavily, since every subarray is considered exactly once, at the recursion level where it crosses the midpoint (or shrinks to a single element). Each level of recursion does O(n) total work on the crossing sums, and there are O(log n) levels, giving O(n log n) overall.

Summary:

I wrote a lot today and haven't had time to review it, but the payoff was substantial. For dynamic programming, for instance, you do not necessarily have to build a full dp array and return its last item; dynamic programming is quite flexible, as long as each dp entry has a clear, well-defined meaning.

Divide-and-conquer recursion, in effect, considers all subarrays for us, but optimizes the enumeration with binary splitting.
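As a quick sanity check (my own addition, not from the original article), the O(n) scan can be compared against a brute-force enumeration on random inputs:

```python
import random

def brute_force(nums):
    """O(n^2) reference: try every (start, end) pair explicitly."""
    return max(sum(nums[i:j + 1])
               for i in range(len(nums))
               for j in range(i, len(nums)))

def kadane(nums):
    """O(n) scan, equivalent to Ideas 1 and 2 above."""
    best_ending = best = nums[0]
    for x in nums[1:]:
        best_ending = max(best_ending + x, x)
        best = max(best, best_ending)
    return best

random.seed(0)
for _ in range(100):
    nums = [random.randint(-10, 10) for _ in range(random.randint(1, 20))]
    assert kadane(nums) == brute_force(nums)
```

A hundred random arrays agreeing with the quadratic reference is good evidence that the linear scan is correct.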
