Hello everyone again. In this module, we will talk about dynamic programming. You have surely already faced it, for example when you implemented prefix sums or the [inaudible]. It also turns out to be a part of a large number of algorithms, so it's extremely important to learn the topic in detail. In this video, we will talk about some basic notions of dynamic programming. Then, in the following videos, we will move on to some specific examples. Let's start. Suppose that you're faced with a hard problem and you don't understand how to solve it. However, if someone gives you solutions of similar but simpler problems, then using these solutions you can easily solve the given problem. Then, moving from simpler problems to harder ones, sooner or later you will reach the initial problem and solve it. Such an approach is called dynamic programming. For example, in the case of prefix sums, you needed to find the sums of numbers in a given array. The simplest way to do it is to iterate over all prefixes of the array and calculate the sum of numbers in each of them. Let's recall it in more detail. For example, consider an array with the numbers 1, 3, 4, 2, 3, 1, 2 and 5. For this array, the sum of the prefix of length 1 is equal to 1. Then we find the sum of the prefix of length 2 using the known value for the prefix of length 1: we add 1 and 3 and get 4. Then we add the third number to the value for the prefix of length 2 that we got earlier, and get the sum of the prefix of length 3, and so on. Here, our global aim was to calculate the sum of the prefix of length i, and we achieved it using the sum of the prefix of length i minus 1. Pay attention to the similarity between dynamic programming and mathematical induction. Recall that to prove a statement by induction, one proves the statement for the smallest parameters, and then proves that if the statement is true for smaller parameters, then the statement is true for bigger parameters too.
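The iterative computation described above can be sketched in code. This is a minimal illustration, not code from the lecture slides; the function name and the use of a 0-based Python list are my own choices, with s[i] holding the sum of the first i elements as in the lecture.

```python
def prefix_sums(a):
    """Return s where s[i] is the sum of the first i elements of a.

    s[0] = 0 is the base; each next value reuses the previous one
    via the transition s[i] = s[i - 1] + a[i - 1].
    """
    s = [0] * (len(a) + 1)
    for i in range(1, len(a) + 1):
        s[i] = s[i - 1] + a[i - 1]  # add the i-th number to the previous sum
    return s

# The array from the example; the prefix sums are 1, 4, 8, 10, 13, 14, 16, 21.
print(prefix_sums([1, 3, 4, 2, 3, 1, 2, 5]))
```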
We will do something very similar to it. To formalize dynamic programming, one considers five basic terms. They are: first, the state; second, the base; third, the transition formula; fourth, the order of iterating; and fifth, the answer. Let's consider each of these notions in depth. The state describes which problem is solved by the dynamic programming. It should contain some parameters of the problem and also say what information about the answer we want to get. In many cases, the state can be extracted from the problem statement straightforwardly. For example, if we are to get the array of prefix sums, then it's a good idea to store in S of i the sum of the numbers of the given array from the first to the i-th. Here i is the parameter of the problem, and S of i for i equal to 1, 2, et cetera, up to the length of the array, are our states. It's not so easy in general: sometimes the state is described by two or three numbers, or even by subsets. One harder case of a state will be discussed at the end of the module. But we cannot solve problems based only on simpler problems. There should exist problems for which we know the answer easily from the logic of the problem or from a case analysis, but not from the dynamics. Such states are called the base. In other words, the base is the set of the simplest states, for which we get the answers not from the dynamics but from some mathematical ideas. Usually the size of the base depends on the transition formula. In the case of prefix sums, the number S of 1 was a good base: it's obvious that the sum of the elements of the array on the prefix of length one is equal to the first element of the array. A base that consists of the prefixes of length one and two would also work, but it is excessive, because the second prefix sum can be calculated using the first one. Another good base is the number S of 0, equal to the sum of the numbers on the prefix of length zero. This sum does not depend on the array and is always equal to zero. The transition formula is one of the most important parts of the dynamics.
In it, the rules of transition from easy problems to hard ones are described. In the simplest case, each next state depends only on the previous one, but in many cases this is not so: transitions may involve two, three, or even more states. An example of a transition formula, in the case of prefix sums, is the formula S of i equals S of i minus 1 plus A of i. As you can see from the formula, to calculate the next sum it is enough to know the previous sum, so a base of size one is quite good for our goals. There exist many harder cases. For example, P of i equal to the maximum of P of i minus 1 and P of i minus 2 plus A of i. Here the value of the dynamics depends on the two previous values, so a base of size one is not enough for us; we need a base of size two. In the common case, the base is defined in such a way that we never need to consider values which are non-existent or not calculated earlier when we use the transition formula. Here you can see more examples of different transition formulas. As you can see, the formulas may be various, and so may the number of parameters that the dynamics depends on. Pay attention to this formula: to calculate it, one should sum [inaudible]. Note that it's usually easy to find the base once you have the transition formula. If you can't find a good order of performing the transitions, there is a special technique we will discuss later; but if you struggle with the formula itself, then you should almost surely modify the state. The order of iterating is the order in which one should calculate the dynamics. We said earlier that we solve a given problem using simpler problems, but in some cases it is not clear which problems are simpler and which problems are harder. That is why we consider the order of iterating as a separate term.
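The harder transition P of i = max(P of i minus 1, P of i minus 2 plus A of i) can be sketched as follows. The lecture does not say which problem this formula solves, so as an illustration I interpret it as the classic "maximum sum of elements with no two adjacent chosen" problem (assuming non-negative numbers), which has exactly this transition; the function name is my own. Note the base of size two, as discussed above.

```python
def best_non_adjacent_sum(a):
    """DP with the transition p[i] = max(p[i - 1], p[i - 2] + a[i]).

    p[i] is the best achievable sum over the first i elements when no two
    chosen elements are adjacent (assuming non-negative numbers). Because
    the transition looks two states back, the base must cover two states.
    """
    n = len(a)
    if n == 0:
        return 0
    p = [0] * (n + 1)
    p[1] = a[0]                    # base: prefix of length one
    if n >= 2:
        p[2] = max(a[0], a[1])     # base: prefix of length two
    for i in range(3, n + 1):
        p[i] = max(p[i - 1], p[i - 2] + a[i - 1])
    return p[n]
```

For the example array 1, 3, 4, 2, 3, 1, 2, 5, the best choice is positions 1, 3, 5, 8 with sum 13.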
In most cases, it is enough to order the states by increasing or by decreasing of the parameter, as with the prefix sums, or by parameters [inaudible] if there is more than one parameter. Other orders occur in some really harder problems, and we will not consider such cases in this course. Let's see what we should do if we have not managed to figure out the order. The corresponding technique is simple and is called lazy dynamics. The core of the technique is to implement a function F which gets the parameters and calculates the corresponding value of the dynamics. As we know the formula, we can implement it and use recursion to calculate the required solutions of the problems with smaller parameters. For example, the function for calculating prefix sums may be as shown on the slide. In this function, we check first whether we are in the base state or not. If so, then we return zero. Otherwise, we calculate the dynamics straightforwardly. Unfortunately, such code will work slowly, because each value is calculated a big number of times. We will discuss it later. Because of that, during each run of F we will check whether we have calculated the corresponding value of the dynamics earlier or not. One can do it using a usual array that saves the values already computed. Such saving is called memoization. The last term is the answer. In many cases the answer is stored as the value of a state, so it's enough to find, for example, the last value of the dynamics. However, sometimes the goal of the dynamics is to calculate more than one number. For example, with the prefix sums, we were to calculate not only the last number, but the whole array. It means that in this problem, the answer is the whole array S. There exist more difficult problems where the answer is the minimal or maximal value among all the states, the sum of some of the values, et cetera. A problem of such type will be considered at the end of the module.
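I cannot reproduce the slide, but lazy dynamics with memoization for prefix sums might look like this sketch. The factory function and the dictionary cache are my own choices (the lecture mentions a usual array); the base check and the transition follow the description above.

```python
def make_prefix_sum(a):
    """Lazy dynamics: f(i) computes the i-th prefix sum on demand.

    Each computed value is saved in `memo` (memoization), so every state
    is evaluated at most once across all calls to f. For branching
    transitions, this saving is what keeps the recursion fast.
    """
    memo = {}  # state -> already computed value of the dynamics

    def f(i):
        if i == 0:
            return 0                          # base state
        if i not in memo:                     # computed earlier?
            memo[i] = f(i - 1) + a[i - 1]     # transition formula
        return memo[i]

    return f

f = make_prefix_sum([1, 3, 4, 2, 3, 1, 2, 5])
print(f(8))  # sum of the whole array: 21
```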
In this lesson, we learned the basic notions of dynamic programming: state, base, transition formula, order of iterating, and answer. The following videos will be aimed at studying different examples of dynamic programming. In the next video, we will consider a solution of a harder problem. See you in the next video.