1. On page 112, in line 11 after “is a Cauchy sequence” there is a comma where there should be a period.

2. On page 113, in the solution of Example 2.4 the integral should be *YdX*, not *XdY*. In Definition 2.5 it is not made explicit in which topology the convergence of the integral is assumed. At that point only functions are integrated, so the convergence is obvious; later, however, processes are integrated, and then the topology is not obvious. From the text it is fairly clear that in the latter case convergence means convergence in probability, but there is no explicit definition of it. Perhaps the best place for one is before the remark preceding Theorem 2.21. Of course, almost sure convergence implies convergence in probability. Starting from classical analysis, where we have both convergence everywhere and an arbitrary choice of test points, we arrive at the case where the test points are fixed and the convergence holds only in probability. The point is that if we want to integrate via approximating sums with respect to functions of finite variation, and we do not want to assume the continuity of the integrands, we must already fix the test points. If we also want to integrate with respect to martingales, we must move to convergence in probability as well.
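For reference, the standard definition that seems to be intended here (notation ours, not the book's): the approximating sums $I_n$ converge to the integral $I$ in probability when

```latex
I_n \xrightarrow{\;\mathbb{P}\;} I
\quad\Longleftrightarrow\quad
\forall \varepsilon > 0:\ \lim_{n\to\infty}
\mathbb{P}\bigl(\,|I_n - I| > \varepsilon\,\bigr) = 0 .
```

Since almost sure convergence implies this, the classical cases are covered automatically.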

3. On page 114, in Lemma 2.7 it is perhaps better to state explicitly that the trajectories of *Y* are piecewise constant with a finite number of jumps.

4. On page 116, in the first formula the parentheses are mismatched. The correct formula is

5. On page 115, there are the definitions

As this type of notation appears many times in the book, some explanation would be useful. Of course

has a clear meaning. *Y* is a process, and as it has regular trajectories, the formula above is simply the process consisting of those jumps which are bigger than the constant *c*. One can think of the jumps of a process as the trees, bushes, grass, etc. in a garden. The garden is a two-dimensional object: one dimension for time and one for the random parameter. One then deletes all the “small” jumps, so what is left is just the trees.

Then comes the junk: the operator **summation** is a perhaps unclear shorthand, used many times and perhaps unwisely, for

The summation operator maps processes to processes; it simply “integrates” with respect to the time parameter. Therefore *Z* is the process from which the “big jumps” have been removed. Of course, when one removes the first big jump, the trajectory after the jump is shifted by the size of that jump; after the second jump, the trajectory is shifted by the sum of the first two jumps, and so on.

One can also notice that in the case of merely regular trajectories the notation is a bit misleading. In this case, if the upper limit of the sum is *t* and there is a jump at *t*, then at *t* one adds only the left jump; after *t* one also adds the right jump. That is, one handles the jumps as functions of time, and one adds functions, not just numbers. (The size of the “jump function” is the modulus of discontinuity at the point of the jump.)
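The compensating-shift idea described above can be sketched for a trajectory stored as a list of successive values of a piecewise-constant path; the function name and the list representation are illustrative only, not taken from the book:

```python
def remove_big_jumps(values, c):
    """Remove from a piecewise-constant trajectory every jump whose
    absolute size exceeds c: after each removed jump, the rest of the
    trajectory is shifted back by the accumulated size of the removed jumps."""
    shift = 0.0  # running sum of the removed ("big") jumps
    out = [values[0]]
    for prev, cur in zip(values, values[1:]):
        jump = cur - prev
        if abs(jump) > c:
            shift += jump  # delete this jump; shift everything after it
        out.append(cur - shift)
    return out
```

For example, `remove_big_jumps([0, 1, 5, 6], 2)` keeps the two unit jumps but deletes the jump of size 4, giving `[0, 1, 1, 2]`.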

6. On page 117, in the second part of the proof of Fisk’s theorem, *M* is the stopped martingale and not the original local martingale, so (2.6) is correct. One can of course use the stopped local martingale explicitly in the second part of the proof as well, but it is perhaps easier to denote the stopped process by *M* again.

7. In the proof of Proposition 2.15 the construction of the stopping time is very similar to the construction in the continuous case on page 121. One should look for the “first” point where the modulus goes *strictly* above *c*. If the trajectories are left- or right-regular, then the “modulus of continuity” is also left- or right-regular, so the modulus process is progressively measurable, and by Theorem 1.28 it is clear that the hitting time is a stopping time. But we want to avoid using this theorem, as we are looking for minimal conditions on the filtration in order to understand the role of the usual conditions. In these cases, as the modulus is right- or left-regular, we can in fact use case 4 of Example 1.32, so if we assume the right-continuity of the filtration we get a stopping time. The reason why on page 122, in the continuous case, one needs to use the relation >= is that in the continuous case one wants to use case 1 of Example 1.32. Of course, one should think a bit about why the “modulus of continuity” is at most *2c* on the **closed** interval up to the stopping time. The modulus is at most *c* on the open interval; but it is possible that there is a jump at the time of stopping, and as the jumps are smaller than *c*, the maximal modulus is at most *2c* on the closed interval. Still, one can wonder what happens when the trajectories are merely regular. In this case the hitting time is not necessarily a stopping time. But one can first prove the existence of the integral for the integrand defined at the jumps by, e.g., the left limits. That is, one should calculate

Then one should prove that the value of the integral is independent of the values at the jumps, that is,

(There is only a finite number of big jumps and the integral of the big jumps is zero, while the integral of the small jumps can be made arbitrarily small if *c*, the threshold in the definition of the big jumps, is sufficiently small.)
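The standard argument hidden behind the reference to Example 1.32 is the following (with $w$ denoting the right-regular modulus process and $\tau \doteq \inf\{s : w(s) > c\}$; notation ours):

```latex
\{\tau < t\} \;=\; \bigcup_{q \in \mathbb{Q},\, q < t} \{\, w(q) > c \,\} \in \mathcal{F}_t,
\qquad
\{\tau \le t\} \;=\; \bigcap_{n} \bigl\{\tau < t + \tfrac1n\bigr\}
\in \bigcap_{n} \mathcal{F}_{t+1/n} = \mathcal{F}_{t+} = \mathcal{F}_t ,
```

where the first equality uses the right-regularity of $w$ and the strict inequality in the definition of $\tau$, and the last equality is exactly where the right-continuity of the filtration enters.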

8. On page 125, the first part of the theorem is obvious, as almost sure convergence implies convergence in probability and addition is continuous with respect to convergence in probability. The proof of the second part is only hinted at. The question is: what is the integral of just the jumps? That is, what is the integral of the difference when one subtracts from the original integrand, e.g., its left-regularized version, which is defined at every point of discontinuity by the left limit of the original integrand? For any *c* the number of jumps bigger than *c* is still finite, so their integral is zero (see Example 2.8), and if *c* is sufficiently small then the integral of the small jumps can be made arbitrarily small (see Lemma 2.13). The last sentence of the proof should refer to Proposition 2.15, as there is of course no Theorem 2.15.
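If, as in this part of the chapter, the integrator $X$ has finite variation, the quantitative statement behind the hint is elementary (notation ours): writing $\tilde Y$ for the left-regularized version, the difference $Y - \tilde Y$ vanishes except at the jumps, so any approximating sum taken at the fixed test points satisfies

```latex
\Bigl|\sum_i \bigl(Y(t_i) - \tilde Y(t_i)\bigr)\bigl(X(t_{i+1}) - X(t_i)\bigr)\Bigr|
\;\le\; \sup_i \bigl|Y(t_i) - \tilde Y(t_i)\bigr| \cdot \operatorname{Var}\bigl(X;[0,t]\bigr),
```

and once the finitely many big jumps are handled separately, the supremum on the right is at most $c$.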

9. On page 126, in section 2.1.5 we implicitly assume that the integrand *Y* is regular; otherwise the integral is not defined. In the definition of the approximating integral processes it is unnecessary to take the minimum of the test points and the time parameter, as this minimum is never “active”. It is also unnecessary to take the right limit of the integrand. The correct formula is simply

It is also unnecessary to refer in the subsection to the right-continuity of the filtration. But the filtration should contain the measure-zero sets, as is remarked at the end of the subsection.
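The simplified formula amounts to the plain left-point approximating sum; a minimal sketch, with names that are ours and not the book's:

```python
def approx_integral(Y, X, partition):
    """Left-point approximating sum  sum_i Y(t_i) * (X(t_{i+1}) - X(t_i))
    of the integral of Y with respect to X over the given partition."""
    return sum(Y(a) * (X(b) - X(a)) for a, b in zip(partition, partition[1:]))
```

With `Y(t) = X(t) = t` on a fine partition of `[0, 1]` the sums approach 1/2, as expected from the classical integral.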

10. On page 128. formula (2.11) is

Also, in the second part of the proposition *X* should be just continuous and of course square-integrable, but not bounded. The boundedness of the integrator is used only later, in Proposition 2.37, which is the main application of Proposition 2.24; and in Proposition 2.37 the integrator is bounded simply because the integrand and the integrator are the same. As bounded martingales are square-integrable, the remark is of course correct, but it is perhaps misleading, as it introduces the later important boundedness without motivation.

11. On page 130, the operator delta is obviously just the increment of the process. Unfortunately, the operator delta has at least three different meanings in the book: most of the time it denotes the jumps of a process; sometimes, as here, it denotes the increments of a process; and occasionally it is the Laplace operator. Hopefully it is always clear from the context which meaning is intended.

12. On page 131, one should refer to (2.13) and not to (2.8).

13. In the proof of the Kunita–Watanabe inequality we face the problem that the quadratic variation is defined only up to modification. Even after fixing this problem, the trajectories of the quadratic variation are defined only almost surely. The inequality is a direct generalization of the Cauchy–Schwarz inequality, but any proof needs some extra, simple measure-theoretic device. With a simple calculation we have shown the inequality for piecewise constant functions. As it is not directly clear that the set of functions satisfying the inequality is a linear space, one cannot directly apply the Monotone Class Theorem; hence one needs a trickier density argument. In the proof we used the fact that every Borel-measurable function is an almost sure limit of piecewise constant functions. One can also use the Radon–Nikodym theorem.
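For the piecewise-constant case the inequality reduces to the discrete Cauchy–Schwarz inequality. A quick numeric illustration, with nonnegative `a`, `b` playing the role of the bracket increments and `h`, `k` the integrand values on the partition (all names here are ours, not the book's):

```python
import math
import random

random.seed(0)
n = 100
a = [random.random() for _ in range(n)]        # increments of [X], nonnegative
b = [random.random() for _ in range(n)]        # increments of [Y], nonnegative
h = [random.uniform(-1, 1) for _ in range(n)]  # values of the first integrand
k = [random.uniform(-1, 1) for _ in range(n)]  # values of the second integrand

# |sum h k sqrt(a b)| <= sqrt(sum h^2 a) * sqrt(sum k^2 b):
# Cauchy-Schwarz applied to the vectors (h_i sqrt(a_i)) and (k_i sqrt(b_i))
lhs = abs(sum(hi * ki * math.sqrt(ai * bi)
              for hi, ki, ai, bi in zip(h, k, a, b)))
rhs = math.sqrt(sum(hi * hi * ai for hi, ai in zip(h, a))) * \
      math.sqrt(sum(ki * ki * bi for ki, bi in zip(k, b)))
assert lhs <= rhs + 1e-12
```

The discrete inequality holds for every choice of the four vectors, which is the piecewise-constant step of the proof.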

14. On page 137, in Corollary 2.35 the correct formula is

15. On page 143, in the proof of Proposition 2.45 the Optional Sampling Theorem is used. The advantage of this proof is that one does not have to refer to the approximating sums again. But from the proof one might think that the hard part of the stopping rule is the relation

In fact it is not. If one is willing to go back to the definition of the quadratic variation, one can easily and directly show this identity.

16. On page 154, at the top of the page, the *d* is missing in the integrals.

17. On page 155, in Example 2.62 it is tacitly assumed that *X* is progressively measurable.

18. On page 164, in footnote 59, one should refer to line 1.18 instead of line 1.17.

19. On page 165, in Lemma 2.75 we do not know anything about the stopping times; they can accumulate, and therefore *Y* is not necessarily regular, so we do not know its integrability. That is why one needs Corollary 2.76.

20. On page 166, in the proof of Proposition 2.77 it is perhaps worth noting that *K* is predictable.