2024-07-12
There are many different methods for solving the same algorithm problem. The final results may all be correct, but the efficiency can vary greatly.
It is like traveling from one place to another: you can go by airplane, train, or private car, and each gets you there, but the time and the cost differ.
After an algorithm is written as an executable program, running it consumes time resources and space (memory) resources. Therefore, the quality of an algorithm is generally measured along two dimensions, time and space, namely time complexity and space complexity.
Time complexity mainly measures how fast an algorithm runs, while space complexity mainly measures the additional space an algorithm needs while running.
In the early days of computing, storage capacity was very small, so space complexity was a major concern. With the rapid development of the computer industry, storage capacity has grown enormously, so space complexity usually no longer needs special attention.
A rough everyday analogy: my phone gets hot when running some games but not others, and this is tied to complexity. The phone finishes simple algorithms quickly, but for complex algorithms it has to run at full power to keep the time down, which makes it heat up.
Definition: in computer science, the time complexity of an algorithm is a function T(N) that quantitatively describes its running time. Time complexity is a measure of a program's time efficiency, so why not simply measure the program's running time?
- Because the running time depends on the compilation environment and the configuration of the machine. For example, the same algorithm compiled with an old compiler and a new compiler will run for different amounts of time, even on the same machine.
- The same program will also take different amounts of time on an old low-end machine and a new high-end machine.
- Furthermore, running time can only be measured after the program is written; it cannot be estimated theoretically before the program exists.
Therefore, when evaluating the time cost of an algorithm, we do not consider the actual time it takes to execute, only the number of times its instructions are executed.
Examples:
// How many times in total is the ++count statement in Func1 executed?
void Func1(int N)
{
	int count = 0;
	for (int i = 0; i < N; ++i)
	{
		for (int j = 0; j < N; ++j)
		{
			++count;
		}
	}
	for (int k = 0; k < 2 * N; ++k)
	{
		++count;
	}
	int M = 10;
	while (M--)
	{
		++count;
	}
}
Through observation, the number of basic operations performed by Func1 is:
T(N) = N² + 2*N + 10
In practice, when we calculate time complexity, we do not count the exact number of executions. Counting exactly is troublesome (different program code compiles to different numbers of instructions), and it is not very meaningful anyway: we calculate time complexity in order to compare the order of growth of algorithms, that is, how T(N) behaves as N keeps increasing. As seen above, when N grows, the constants and lower-order terms have little effect on the result, so we only need an approximate count that captures the order of growth. Complexity is usually expressed using Big O asymptotic notation.
Space complexity is also a mathematical expression; it refers to the additional space an algorithm temporarily allocates during its execution, due to the needs of the algorithm.
Space complexity is not the number of bytes the program occupies. Since under normal circumstances the sizes of individual objects do not differ greatly, space complexity is calculated by counting variables.
The calculation rules for space complexity are basically similar to those for time complexity, and Big O asymptotic notation is also used.
Note: the stack space needed for a function call (to store parameters, local variables, some register information, etc.) is determined at compile time, so space complexity is mainly determined by the additional space the function explicitly requests at run time.
Space is not a key constraint in modern computing, but excessive waste should still be avoided, as it causes unnecessary problems.
Big O notation: a mathematical notation used to describe the asymptotic behavior of a function.
The rules for deriving Big O asymptotic notation are:
- 1. In the time complexity function T(N), keep only the highest-order term and drop the lower-order terms, because as N keeps growing, the influence of the lower-order terms on the result becomes smaller and smaller, and as N approaches infinity they can be ignored.
- 2. If the highest-order term exists and its coefficient is not 1, drop the constant coefficient, because as N grows, its impact on the result becomes smaller and smaller, and as N approaches infinity it can be ignored.
- 3. If T(N) contains no terms involving N, only constant terms, replace all the additive constants with the constant 1.
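Applying these three rules to the count derived earlier for Func1:

```latex
T(N) = N^2 + 2N + 10
\;\xrightarrow{\text{rule 1: drop lower-order terms}}\; N^2
\;\xrightarrow{\text{rule 2: coefficient is already 1}}\; N^2
\quad\Rightarrow\quad T(N) = O(N^2)
```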
Worst case: the maximum number of operations for any input of size N (upper bound).
Average case: the expected number of operations for any input of size N.
Best case: the minimum number of operations for any input of size N (lower bound).
In practice, Big O asymptotic notation generally focuses on the upper bound of the algorithm, that is, the worst case.
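As an illustration (a hypothetical linear search, not one of the examples below): searching an array of N elements takes 1 comparison in the best case, about N/2 on average, and N in the worst case, so its Big O complexity is O(N).

```c
#include <stddef.h>

// Illustrative sketch: linear search over a[0..n-1].
// Best case: target at index 0 (1 comparison).
// Worst case: target absent (n comparisons) -> O(N).
int linear_search(const int* a, size_t n, int target)
{
	for (size_t i = 0; i < n; ++i)
	{
		if (a[i] == target)
			return (int)i;
	}
	return -1;
}
```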
void Func1(int N)
{
	int count = 0;
	for (int i = 0; i < N; ++i)
	{
		for (int j = 0; j < N; ++j)
		{
			++count;
		}
	}
	for (int k = 0; k < 2 * N; ++k)
	{
		++count;
	}
	int M = 10;
	while (M--)
	{
		++count;
	}
}
Time complexity:
Omit the code that runs a constant number of times, such as count = 0.
Keep the execution counts that have a large impact on the running time.
The first loop is a two-layer nested loop; it executes N*N times.
The second loop is a single loop; it executes 2*N times.
The third loop executes 10 times.
The total number of executions is T(N) = N² + 2*N + 10.
Big O keeps only the dominant term as N grows, discarding the small contributions:
O(N²)
Space complexity:
Func1 requests a constant amount of space: int count = 0 requests space for one int, and int M = 10 requests space for another int.
These two variables are reused inside the loops; no additional space is requested.
The amount of requested space is constant, so the space complexity in Big O notation is:
O(1)
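To check the count T(N) = N² + 2*N + 10 directly, here is a sketch (my own variant, not from the original) that returns Func1's final count:

```c
// Variant of Func1 that returns count, so T(N) = N*N + 2*N + 10
// can be verified for concrete values of N.
int Func1Count(int N)
{
	int count = 0;
	for (int i = 0; i < N; ++i)
		for (int j = 0; j < N; ++j)
			++count;
	for (int k = 0; k < 2 * N; ++k)
		++count;
	int M = 10;
	while (M--)
		++count;
	return count;
}
```

For N = 10 this returns 130 = 100 + 20 + 10, matching T(10).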
#include <stdio.h>

void Func2(int N)
{
	int count = 0;
	for (int k = 0; k < 2 * N; ++k)
	{
		++count;
	}
	int M = 10;
	while (M--)
	{
		++count;
	}
	printf("%d\n", count);
}
Time complexity:
The first loop runs the most: 2*N times.
The second loop runs 10 times.
Expressing the time complexity in Big O, the dominant term is 2*N; dropping the constant coefficient gives:
O(N)
#include <stdio.h>

void Func3(int N, int M)
{
	int count = 0;
	for (int k = 0; k < M; ++k)
	{
		++count;
	}
	for (int k = 0; k < N; ++k)
	{
		++count;
	}
	printf("%d\n", count);
}
Time complexity:
There are two unknowns, and the two loops dominate the running time.
The first loop runs M times.
The second loop runs N times.
Because M and N are both unknowns and we cannot tell which is larger, neither term is omitted:
O(M+N)
#include <stdio.h>

void Func4(int N)
{
	int count = 0;
	for (int k = 0; k < 100; ++k)
	{
		++count;
	}
	printf("%d\n", count);
}
Time complexity:
The dominant part here is the single loop.
The loop runs 100 times, which is a fixed constant, so:
O(1)
#include <stddef.h>

const char* strchr(const char* str, int character)
{
	const char* p_begin = str;
	while (*p_begin != character)
	{
		if (*p_begin == '\0')
			return NULL;
		p_begin++;
	}
	return p_begin;
}
This function finds the first occurrence of a character in a string and returns a pointer to it (or NULL if the character is absent).
The number of comparisons here is not fixed:
- It may be found immediately: F(N) = 1 (best case).
- It may be found in the middle: F(N) = N/2 (average case).
- It may be found at the very end, or NULL may be returned because it is not found: F(N) = N (worst case).
Big O takes the worst case, so the time complexity is O(N).
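A self-contained usage sketch of this scan (renamed my_strchr here to avoid clashing with the standard library's strchr, and with the string terminator check written as '\0'):

```c
#include <stddef.h>

// my_strchr: returns a pointer to the first occurrence of c in str,
// or NULL if c does not appear before the terminating '\0'.
const char* my_strchr(const char* str, int c)
{
	const char* p = str;
	while (*p != c)
	{
		if (*p == '\0')
			return NULL;
		++p;
	}
	return p;
}
```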
void func5(int n)
{
	int cnt = 1;
	while (cnt < n)
	{
		cnt *= 2;
	}
}
The number of iterations here depends on the size of n, but the loop does not run n times: the loop variable is doubled on each iteration, so the loop exits quickly. After ten iterations, cnt already reaches 1024.
Let the number of iterations be x; then 2^x = n, so x = log₂n.
So the time complexity is O(logN). (The base of the logarithm is dropped in Big O, since logarithms of different bases differ only by a constant factor.)
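A small sketch (hypothetical helper name) that counts func5's iterations confirms the logarithmic growth:

```c
// Counts how many times func5's loop body runs for a given n.
// The count grows like log2(n): doubling n adds only one iteration.
int func5_steps(int n)
{
	int cnt = 1;
	int steps = 0;
	while (cnt < n)
	{
		cnt *= 2;
		++steps;
	}
	return steps;
}
```

For example, func5_steps(1024) is 10, and doubling n to 2048 adds only one more iteration.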
#include <assert.h>
#include <stddef.h>

// Swap exchanges the values of two ints.
void Swap(int* px, int* py)
{
	int tmp = *px;
	*px = *py;
	*py = tmp;
}

void BubbleSort(int* a, int n)
{
	assert(a);
	for (size_t end = n; end > 0; --end)
	{
		int exchange = 0;
		for (size_t i = 1; i < end; ++i)
		{
			if (a[i - 1] > a[i])
			{
				Swap(&a[i - 1], &a[i]);
				exchange = 1;
			}
		}
		if (exchange == 0)
			break;
	}
}
This function performs an ascending bubble sort.
There is a two-layer nested loop here, and the number of iterations is not fixed.
In the worst case, the outer loop runs n times,
and the inner loop runs (n-1) + (n-2) + (n-3) + … + 2 + 1 + 0 times in total.
This is an arithmetic series, and its sum is n*(n-1)/2.
Time complexity: O(N²)
(In the best case, the array is already sorted, exchange stays 0, and the sort exits after one pass: O(N). Big O takes the worst case.)
Space complexity:
No additional space is requested beyond a constant number of variables, none of which depends on n.
O(1)
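A self-contained usage sketch of the bubble sort above (with a minimal Swap helper added, since the original snippet assumes one exists):

```c
#include <assert.h>
#include <stddef.h>

// Minimal helper assumed by BubbleSort: exchanges two ints.
static void Swap(int* px, int* py)
{
	int tmp = *px;
	*px = *py;
	*py = tmp;
}

// Ascending bubble sort; the exchange flag gives early exit
// on an already-sorted array (best case O(N), worst case O(N^2)).
void BubbleSort(int* a, int n)
{
	assert(a);
	for (size_t end = n; end > 0; --end)
	{
		int exchange = 0;
		for (size_t i = 1; i < end; ++i)
		{
			if (a[i - 1] > a[i])
			{
				Swap(&a[i - 1], &a[i]);
				exchange = 1;
			}
		}
		if (exchange == 0)
			break;
	}
}
```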
#include <stddef.h>

long long Fac(size_t N)
{
	if (0 == N)
		return 1;
	return Fac(N - 1) * N;
}
Time complexity:
The function recurses N times, and each call does a constant amount of work.
So the time complexity is O(N).
Space complexity:
Each recursive call pushes a new stack frame, so N recursive calls allocate N frames' worth of space.
The space complexity is O(N).
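For contrast, an iterative sketch of the same factorial (my own variant) keeps the time at O(N) but needs no recursive stack frames, so its space complexity drops to O(1):

```c
#include <stddef.h>

// Iterative factorial: O(N) time, O(1) extra space,
// since only a constant number of variables are used.
long long FacIter(size_t N)
{
	long long result = 1;
	for (size_t i = 2; i <= N; ++i)
		result *= (long long)i;
	return result;
}
```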