A Few Tips for Efficient C Programming

  • 2020-04-02 03:06:54
  • OfStack

Preface:

Writing efficient and concise C code is the goal of many software engineers. This article sets out a few techniques drawn from my own working experience; corrections from readers are welcome.

Step 1: trade space for time

The biggest tension in a computer program is the tension between space and time. Approaching program efficiency from this angle gives us the first technique: trade space for time.

For example, assigning a value to a string.
Method A, the usual way:


#define LEN 32
char string1[LEN];
memset(string1, 0, LEN);
strcpy(string1, "This is an example!");

Method B:

const char string2[LEN] = "This is an example!";
const char *cp;
cp = string2;

(When the string is needed, simply use the pointer cp directly.)

As the example shows, A and B are not even close in efficiency. For the same storage space, B works directly through a pointer, while A needs two library calls to do its job. B's disadvantage is flexibility: when the contents of the string must change frequently, A is the more flexible choice. Using method B means prestoring many strings, which costs memory, but in exchange the program runs faster.

If your system has hard real-time requirements and memory to spare, I recommend this technique.
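
As a concrete illustration of prestoring many strings, here is a minimal sketch: a table of constant messages handed out by pointer instead of being copied at run time. The table contents and the helper name get_message are my own invention, not part of the original example.

#include <stdio.h>

/* Hypothetical message table: the strings live in read-only storage,
   so "using" one costs a pointer lookup rather than memset + strcpy. */
static const char *const messages[] = {
    "This is an example!",
    "Operation completed.",
    "Operation failed.",
};

const char *get_message(unsigned int msg_id)
{
    return (msg_id < sizeof messages / sizeof messages[0]) ? messages[msg_id] : "";
}

int main(void)
{
    printf("%s\n", get_message(0)); /* prints: This is an example! */
    return 0;
}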

A variation of this technique is to use macro functions instead of ordinary functions. For example:

Method C:


#define bwMCDR2_ADDRESS 4   /* field width in bits         */
#define bsMCDR2_ADDRESS 17  /* field start (shift) in bits */

/* Function version: the width and shift must be passed as arguments
   (token pasting such as bw ## __bf only works inside macros), and
   dst must be a pointer so the caller's variable is actually updated. */
unsigned int BIT_MASK(unsigned int bw, unsigned int bs)
{
    return ((1U << bw) - 1) << bs;
}

void SET_BITS(unsigned int *dst, unsigned int bw, unsigned int bs, unsigned int val)
{
    *dst = (*dst & ~BIT_MASK(bw, bs)) | ((val << bs) & BIT_MASK(bw, bs));
}

/* usage: SET_BITS(&MCDR2, bwMCDR2_ADDRESS, bsMCDR2_ADDRESS, RegisterNumber); */

Method D:


#define bwMCDR2_ADDRESS 4   /* field width in bits         */
#define bsMCDR2_ADDRESS 17  /* field start (shift) in bits */
#define bmMCDR2_ADDRESS BIT_MASK(MCDR2_ADDRESS)
#define BIT_MASK(__bf) (((1U << (bw ## __bf)) - 1) << (bs ## __bf))
#define SET_BITS(__dst, __bf, __val) \
    ((__dst) = ((__dst) & ~(BIT_MASK(__bf))) | \
               (((__val) << (bs ## __bf)) & (BIT_MASK(__bf))))

/* usage: SET_BITS(MCDR2, MCDR2_ADDRESS, RegisterNumber); */

The difference between a function and a macro function: the macro function costs space, the function costs time. A function call uses the system stack to save data; if the compiler's stack-checking option is enabled, extra instructions are usually emitted in the function prologue to check the current stack. In addition, the CPU must save and restore the context on every call, pushing to and popping from the stack, so each function call costs CPU time. A macro function has none of this overhead: it is simply expanded into the current program as pre-written code and generates no call, so it costs only space, especially when the same macro function is used in many places.

Method D is the best piece of bit-manipulation code I have seen. It comes from ARM company source code and implements, in three short lines, nearly every bit-field operation you need. Method C is its function variant; the difference in flavor is worth savoring.
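
To see method D at work, here is a small, self-contained usage sketch; MCDR2 is modeled as an ordinary variable standing in for the real hardware register, and the printed value is only there to make the effect visible.

#include <stdio.h>

#define bwMCDR2_ADDRESS 4    /* field width: 4 bits    */
#define bsMCDR2_ADDRESS 17   /* field starts at bit 17 */
#define BIT_MASK(__bf) (((1U << (bw ## __bf)) - 1) << (bs ## __bf))
#define SET_BITS(__dst, __bf, __val) \
    ((__dst) = ((__dst) & ~(BIT_MASK(__bf))) | \
               (((__val) << (bs ## __bf)) & (BIT_MASK(__bf))))

int main(void)
{
    unsigned int MCDR2 = 0;          /* stand-in for the register   */
    unsigned int RegisterNumber = 5; /* value to place in the field */

    SET_BITS(MCDR2, MCDR2_ADDRESS, RegisterNumber);
    printf("0x%08X\n", MCDR2);       /* prints 0x000A0000 (5 << 17) */
    return 0;
}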

Step 2: solve the problem mathematically

Now for the second technique of efficient C programming: use mathematical methods to solve the problem.

Mathematics is the mother of computing; without a mathematical foundation there would be no computers. So when writing programs, applying a little mathematics can improve execution efficiency by an order of magnitude.
For example, compute the sum of 1 to 100.
Method E:


int i, j = 0;
for (i = 1; i <= 100; i++) {
    j += i;
}

Method F:

int i;
i = (100 * (1 + 100)) / 2;

This example is the use of mathematics that left the deepest impression on me; it was a test set by my first computer teacher. I was in third grade at the time and, unfortunately, did not know to apply the formula N(N+1)/2. Method E loops 100 times to get the answer, which costs at least 100 assignments, 100 comparisons, and 200 additions (to i and j); method F uses only one addition, one multiplication, and one division. The effect speaks for itself. So now, when I program, I spend more effort looking for patterns and squeezing the most out of mathematics to make programs run faster.
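
The same idea generalizes from 100 to any N. Below is a minimal sketch; the function name sum_1_to_n is chosen here for illustration, and for very large N the product N*(N+1) would need a wider integer type to avoid overflow.

#include <stdio.h>

/* Closed form N*(N+1)/2: constant time instead of an N-iteration loop.
   One of N and N+1 is always even, so the division is exact. */
unsigned long sum_1_to_n(unsigned long n)
{
    return n * (n + 1) / 2;
}

int main(void)
{
    printf("%lu\n", sum_1_to_n(100)); /* prints 5050 */
    return 0;
}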

Step 3: use bitwise operations

The third technique of efficient C programming: use bit operations to reduce division and modulo.

In a computer program, the bit is the smallest unit of data that can be manipulated; in theory, every computation can be completed with bit operations. Bit operations are usually used to control hardware or to transform data, but used flexibly they can also noticeably improve program efficiency. For example:
Method G:


int i, j;
i = 257 / 8;
j = 456 % 32;

Method H:

int i, j;
i = 257 >> 3;
j = 456 - (456 >> 5 << 5);

Method H looks more troublesome than G, but if you examine the generated assembly you will see that method G calls the compiler's modulo and division routines, which means function calls plus a fair amount of assembly and register traffic, while method H compiles down to a few lines of assembly, simpler and more efficient. Of course, the gap depends on the compiler, but with the MS C and ARM C compilers I have used so far, the difference is not small. The corresponding assembly is not listed here.
One thing to watch out for with this technique is that different CPUs behave differently. For example, a program written and debugged on a PC may misbehave when ported to a 16-bit platform, so use it only when you understand the target well.
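
One way to stay on safe ground, sketched below under the assumption that <stdint.h> is available: apply the tricks to fixed-width unsigned types, so the result does not depend on whether int is 16 or 32 bits on the target, and so the shift cannot surprise you the way it can with negative signed values (where x >> 3 and x / 8 round differently). The helper names are mine, not from the article.

#include <stdint.h>
#include <stdio.h>

/* x / 8 for unsigned values: one shift instead of a division. */
static uint32_t div_by_8(uint32_t x)
{
    return x >> 3;
}

/* x % 32: because 32 is a power of two, this masking form is equivalent
   to the 456 - (456 >> 5 << 5) pattern used in method H. */
static uint32_t mod_32(uint32_t x)
{
    return x & 31u;
}

int main(void)
{
    printf("%u %u\n", (unsigned)div_by_8(257), (unsigned)mod_32(456)); /* prints 32 8 */
    return 0;
}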

Step 4: embedded assembly

The fourth technique of efficient C programming: embedded assembly.

To those fluent in assembly language, programs written in C look like garbage. That view is extreme, but it has a point: assembly is the most efficient computer language. Still, you can hardly write an entire operating system in it, so to squeeze out the last bit of efficiency we adopt a compromise: embedded assembly, i.e. mixed-language programming.

As an example, copy array string1 to array string2, with the requirement that every byte be copied exactly.


char string1[1024],string2[1024];

Method I:

int i;
for (i = 0; i < 1024; i++)
    *(string2 + i) = *(string1 + i);

Method J:

#ifdef _PC_
int i;
for (i = 0; i < 1024; i++)
    *(string2 + i) = *(string1 + i);
#else
#ifdef _ARM_
__asm
{
    MOV   R0, string1      /* R0 -> source buffer           */
    MOV   R1, string2      /* R1 -> destination buffer      */
    MOV   R2, #0           /* R2 = byte counter             */
loop:
    LDMIA R0!, {R3-R4}     /* load two words (8 bytes)      */
    STMIA R1!, {R3-R4}     /* store them, advance R1        */
    ADD   R2, R2, #8
    CMP   R2, #1024        /* 1024 bytes -> 128 iterations  */
    BNE   loop
}
#endif
#endif

Method I is the ordinary approach and needs 1024 loop iterations. Method J is split by platform: on ARM, the embedded assembly finishes the same job in only 128 loop iterations by moving 8 bytes per pass. Some readers will ask: why not use the standard library copy function? Because the source data may contain zero-valued bytes, at which a string copy routine such as strcpy would stop early and leave the job unfinished. This routine is typically used for copying LCD frame data. Depending on the CPU, skilled use of the matching embedded assembly can greatly improve execution efficiency.

Powerful as it is, embedded assembly exacts a heavy price if used casually: it destroys the portability of the program, so porting to other platforms becomes a path full of hidden traps and dangers. It also runs against the grain of modern software engineering, and should be adopted only when there is no other way. Remember that.

