- 686 videos
- 952,718 views
MatheMagician
Joined 19 Oct 2016
My name is Bernard Meulenbroek; I am an assistant professor at the mathematics department of Delft University of Technology. For the past ten years I have been teaching mathematics (calculus, linear algebra, complex analysis) to engineering students.
Many engineering students have three types of questions: why, how and what. `Why do I need to know all this mathematics?' `How do I do this computation?' `What does it ... mean in reality?' So in my (web)lectures I try to address all three questions.
I hope the videos will help you during your engineering study. If you have any questions, just ask on our forums. Mathematics is a lot more fun when you do not just understand the `how' part but also know the `what' and the `why' part.
Good luck and have fun!
The inverse laplace transform and initial value problems - example two
Let us now apply the Laplace transform to a slightly more complicated initial value problem. First of all, it is a nice example: we go through all the steps. Secondly, it also shows the bottlenecks: which steps are easy and which are difficult in this procedure?
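As a sketch of the procedure, the whole method can be reproduced in SymPy; the IVP below (y' + y = 0 with y(0) = 1) is an illustrative choice, simpler than the example in the video:

```python
from sympy import symbols, inverse_laplace_transform, exp, simplify

t, s = symbols('t s', positive=True)
# IVP: y' + y = 0, y(0) = 1.
# Transforming: (s*Y - 1) + Y = 0, so Y(s) = 1/(s + 1)  (the easy algebra step).
Y = 1/(s + 1)
# The inverse transform is typically the hardest step; here SymPy does it for us.
y = inverse_laplace_transform(Y, s, t)   # e^{-t}
```

Note how the bottleneck shows up: transforming and solving for Y is mechanical, while inverting Y is where partial fractions and table lookups usually enter.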
102 views
Videos
The phase space - complex eigenvalues
31 views, 7 months ago
What does the phase space look like if A has complex eigenvalues? In that case we first use a change of basis to get y'=Cy, because in this new basis it is much easier to sketch the phase space. Once we have found the phase space for y, we can revert to the original variables to find the phase space for x.
The Jacobian
52 views, 7 months ago
When we linearize a vector function we use the Jacobi matrix. The determinant of this matrix is called the Jacobian. This Jacobian is used when we make a change of variables in double or triple integrals, for example when we use polar or spherical coordinates. What is this Jacobi matrix and how do we compute the Jacobian? That is what you will learn in this video.
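The polar-coordinate case mentioned above can be worked out in a few lines of SymPy (a minimal sketch, not the computation from the video):

```python
from sympy import symbols, cos, sin, Matrix, simplify

r, theta = symbols('r theta', positive=True)
# The polar change of variables: x = r cos(theta), y = r sin(theta)
X = Matrix([r*cos(theta), r*sin(theta)])
J = X.jacobian(Matrix([r, theta]))   # the Jacobi matrix of partial derivatives
jac = simplify(J.det())              # the Jacobian; simplifies to r
```

This recovers the familiar factor r in dA = r dr dtheta for double integrals in polar coordinates.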
Taylor polynomials for functions of two variables
16 views, 7 months ago
Taylor polynomials are very useful to approximate functions around a certain point. You have already seen these polynomials for a function of one variable. Is this also possible for functions of more variables? Yes it is, as you will learn in this video. The expressions do become a bit more lengthy though, as you will see as well.
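The two-variable expressions can be built directly from partial derivatives; the function below is an illustrative choice, not necessarily the one from the video:

```python
from sympy import symbols, exp, cos, diff, simplify

x, y = symbols('x y')
f = exp(x)*cos(y)
at0 = {x: 0, y: 0}
# Second-order Taylor polynomial of f around (0, 0):
# f + f_x*x + f_y*y + f_xx*x^2/2 + f_xy*x*y + f_yy*y^2/2 (evaluated at the point)
p2 = (f.subs(at0)
      + diff(f, x).subs(at0)*x + diff(f, y).subs(at0)*y
      + diff(f, x, 2).subs(at0)*x**2/2
      + diff(f, x, y).subs(at0)*x*y
      + diff(f, y, 2).subs(at0)*y**2/2)
```

For this f the polynomial collapses to 1 + x + x**2/2 - y**2/2, already noticeably longer than the one-variable case, as the description promises.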
Solution strategy for x'=Ax, A diagonalizable
35 views, 7 months ago
How can we solve x'=Ax if A is diagonalizable? In a linear algebra course you may have learnt how to do this. In this video we will review the steps. These steps will give us an easy final expression for n independent solutions. We recommend memorizing this final expression; from now on we will often use it without doing all the linear algebra steps in between anymore, because this will allow us ...
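The final expression referred to here, x(t) = e^{lambda t} v for each eigenpair (lambda, v), can be checked symbolically; the matrix A below is a hypothetical diagonalizable example, not taken from the video:

```python
from sympy import Matrix, symbols, exp, simplify

t = symbols('t')
A = Matrix([[1, 2], [2, 1]])   # a diagonalizable example matrix
solutions = []
for lam, mult, vecs in A.eigenvects():
    v = vecs[0]
    x = exp(lam*t)*v           # the memorable final expression: x(t) = e^{lambda t} v
    # verify that x' = A x holds for this candidate solution
    assert (x.diff(t) - A*x).applyfunc(simplify) == Matrix([0, 0])
    solutions.append(x)
```

For an n-by-n diagonalizable A this loop produces the n independent solutions the description mentions.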
The Wronskian for systems of differential equations
26 views, 7 months ago
If we have a few solutions of a homogeneous linear system, then any superposition will be a solution too. That is nice, of course, but how do we know that we have found all solutions of our problem? So when do we have the general solution? To answer that, we have the Wronskian again. You may have encountered the Wronskian already when studying second order linear differential equations. So ho...
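For systems, the Wronskian is the determinant of the matrix whose columns are the candidate solutions; a minimal sketch, using a hypothetical example system with A = [[1, 2], [2, 1]] (not from the video):

```python
from sympy import Matrix, symbols, exp, simplify

t = symbols('t')
# Two solutions of x' = Ax for A = [[1, 2], [2, 1]]
x1 = exp(3*t)*Matrix([1, 1])
x2 = exp(-t)*Matrix([1, -1])
# Wronskian: determinant of the matrix with the solutions as columns
W = simplify(Matrix.hstack(x1, x2).det())   # -2*exp(2*t), never zero
```

Since W is nonzero for every t, these two solutions are independent and their superpositions give the general solution.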
Superposition of solutions
15 views, 7 months ago
Linear homogeneous systems have the nice property of superposition. What does this mean? If we have two solutions of our problem, we can add them up and we still have a solution. This even holds for any linear combination. Why do we have this and where do the linearity and homogeneity come in? That is what you will see in this video.
Linear and homogeneous systems of differential equations
12 views, 7 months ago
The class of systems of differential equations is way too large. So we will first study a subset of systems on which we pose some restrictions. We will require linearity of our system and on top of that we require our system to be homogeneous. These restrictions allow us to make some analytical progress, which is good. And furthermore, the properties and solutions of linear and homogeneous will...
Conversion into a system of differential equations
20 views, 7 months ago
Why are systems of differential equations interesting? First of all, many applications can be seen as a system of differential equations. As a first example we will see that this holds for a mass-spring system. Secondly, we can convert higher order equations to a system of differential equations. These systems are easier to analyze than the original higher order equations. And thirdly, when so...
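The conversion mentioned here can be sketched numerically; the second order equation below is an illustrative choice, not the example from the video:

```python
import numpy as np

# Convert y'' + 3y' + 2y = 0 into x' = A x with x1 = y, x2 = y':
#   x1' = x2
#   x2' = -2*x1 - 3*x2
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
# The eigenvalues of this companion matrix are the roots of r^2 + 3r + 2 = 0
eigvals = np.sort(np.linalg.eigvals(A).real)   # -2 and -1
```

The eigenvalues of the companion matrix reproduce exactly the roots of the characteristic equation of the original second order equation, which is why the system form loses no information.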
Step functions and shifting functions
33 views, 7 months ago
Step functions are not only used to step from one constant to another; they can be used much more generally. We will encounter two possibilities in this video. We can use them to shift the entire function to the left or the right. This is how we encounter them in Laplace transforms and inverse Laplace transforms, so this application is the most important for us. We can also use them to start ...
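The connection to Laplace transforms is the shift rule; a minimal SymPy check of the simplest case (the transform of a shifted step itself):

```python
from sympy import symbols, laplace_transform, Heaviside, exp, simplify

t, s, a = symbols('t s a', positive=True)
# u(t - a): the unit step switched on at t = a
F = laplace_transform(Heaviside(t - a), t, s, noconds=True)
# Shift rule prediction: L{u(t - a)} = e^{-a s} / s
```

The factor e^{-a s} is the signature of a shift by a in the time domain, which is exactly how shifted functions are recognized when computing inverse Laplace transforms.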
Partial fractions - example
53 views, 8 months ago
We can integrate any rational function, as long as we can factorize its denominator. We use a partial fraction decomposition to do so. But how does this work in practice? Let us take a look at a slightly more complicated example in this video.
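Decompositions like the one worked out in the video can be checked with SymPy's apart; the rational function below is an illustrative choice, not the example from the video:

```python
from sympy import symbols, apart, together, simplify

x = symbols('x')
# A rational function whose denominator is already factorized
f = (3*x + 1)/((x - 1)*(x + 2))
decomp = apart(f)   # partial fraction decomposition: 4/(3*(x-1)) + 5/(3*(x+2))
```

Recombining the terms with together recovers f, which is a quick sanity check on any hand-computed decomposition; each term of decomp is then easy to integrate.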
Partial fractions - introduction
21 views, 8 months ago
Can we find the antiderivative of rational functions? Yes, we can, as long as we can factorize the denominator. But how do we do this? We use a technique called partial fraction decomposition. In this video you will learn about the ideas behind the method following an explicit example.
The Laplace transform and initial value problems - example one
37 views, 8 months ago
The Laplace transform of f', f'' and so on can easily be expressed in terms of the Laplace transform of f. This means that we can get rid of all derivatives in an equation if we take its Laplace transform. Doing so, a differential equation is converted to an algebraic equation. And solving an algebraic equation is much easier than solving a differential equation, so this is a popular method to ...
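The rule being used here is L{f'} = s F(s) - f(0); it can be verified on a concrete function (the choice f = e^{2t} below is an illustrative assumption):

```python
from sympy import symbols, laplace_transform, exp, diff, simplify

t, s = symbols('t s', positive=True)
f = exp(2*t)                                                  # concrete test function
lhs = laplace_transform(diff(f, t), t, s, noconds=True)       # L{f'} directly
rhs = s*laplace_transform(f, t, s, noconds=True) - f.subs(t, 0)  # s*F(s) - f(0)
```

Both sides come out to 2/(s - 2), confirming that taking the transform trades a derivative for a multiplication by s plus an initial-value term; applying this repeatedly removes all derivatives from the equation.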
The factorial - introduction
7 views, 8 months ago
In high school you may have encountered the factorial, for example when doing some combinatorics. This notation however occurs more often, for example in sequences and series. So in this video we will briefly recap what this factorial is and we will derive some useful formulas involving the factorial that we need later on.
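One of the useful formulas in question is the recursion n! = n * (n-1)!, which can be checked directly:

```python
import math

# The defining recursion n! = n * (n-1)!, with the convention 0! = 1
for n in range(1, 8):
    assert math.factorial(n) == n * math.factorial(n - 1)
```

This recursion is exactly what makes factorials appear when simplifying the ratio of consecutive terms in sequences and series, e.g. (n+1)!/n! = n + 1.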
Improper integrals of type one
17 views, 8 months ago
We know how to compute definite integrals from a to b, but what happens if one of the boundaries is infinite? In that case we have a so-called improper integral, an improper integral of type one to be precise. How can we handle this? That is what you will learn in this video.
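The standard pair of examples illustrating convergence versus divergence can be checked in SymPy (these integrands are illustrative choices, not necessarily those in the video):

```python
from sympy import symbols, integrate, oo

x = symbols('x', positive=True)
# The limit of the integral up to b, as b -> infinity, either exists...
I_conv = integrate(1/x**2, (x, 1, oo))   # converges: equals 1
# ...or it does not
I_div = integrate(1/x, (x, 1, oo))       # diverges: infinite
```

Both integrands tend to zero as x grows, yet only 1/x**2 decays fast enough for the limit to exist; this is the key subtlety of type one improper integrals.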
Fourier series of even and odd functions
20 views, 8 months ago
Integration of even and odd functions
29 views, 8 months ago
Solving PDEs using separation of variables
120 views, 9 months ago
The substitution rule - introduction
25 views, 9 months ago
Solving quadratic equations - completing the square
22 views, 9 months ago
For the proof at the end, are you assuming that c1v1 + ... + cpvp = 0 (i.e. assuming the vectors are independent beforehand)? Because how else would you get the result that cp = 0?
Hi abhiraja, thank you for your question. Indeed, in order to show that p vectors are independent, we have to look at the equation c1v1+...+cpvp = 0 and we have to show that the only solution of this vector equation is c1=0, c2=0, ..., cp=0. This is by definition what independence means. In this case we do this as follows: first we multiply with (A-lambda I)^{p-1}, and this yields cp (A-lambda I)^{p-1} vp = 0, hence cp=0. Then we can repeat this argument with (A-lambda I)^{p-2} and so on to show that all the other coefficients are zero. So the only solution of the vector equation c1v1+...+cpvp = 0 is c1=c2=...=cp=0, which shows that the vectors v1,...,vp are independent. Hope this helps a bit, good luck!
Thank you
Thanks for your useful video. Never give up
You look like Outdoor Boys :)
😮
Nais vidio! Sanc u beri moch. (Zorry phor bad inglis)
V2=e1 u choosef it right?
Hi Ayishas804, thank you for your question. This is for a general matrix A, so in general you will not have v2 = e1. In the next videos in this playlist you can find an explicit example how you can find these generalized eigenvectors. Hope this helps.
@MatheMagician thank you sir for your reply,I am from India ❤️
Helloooo
This is the cleanest explanation of this i've ever seen!
Should be seen whatever explain
How did you get U3 as [0,0,1]?
why not compute the JCF ?
One of the most amazing explanations I've seen of this topic.
thank you so much
No worries!
Very brief yet straightforward explanation. I needed that 🙌🙌
Thank you!
Thank you! I suggest you try another microphone tho, great video regardless
Sir please improve your speaking accent I'm not able to understand or its better if you speak louder
Brother this is from 7 years ago 😭 why bother telling him. Also it’s not difficult to understand just focus. Or move onto another video
Thanks man!
thanks, I was hard stuck in the same question for two days and your video helped me figure it out. Thanks a lot!
Very good thanks.
I needed this. Thanks.
Are you really speaking English in this video? 😂
great explanation!
Amazing yet underrated video! Thank you so much. One suggestion: can you increase the volume of your videos? I think it's why you are not as famous as you deserve to be. I hope you will be popular cuz your vids are so helpful!
Hi plankalkulcompiler, thank you for your kind reaction. On the older videos the volume is indeed too low. In the meantime we have improved this, so I hope this is better now in the more recent videos.
You should have mentioned around 1:00 that the singular values are the square roots of the non zero eigenvalues, otherwise it appears like you computed it out of thin air
Hi dsgarden, thank you for your remark. You are right; I have now added a flag to another video that explains how you can find the singular values. Hope this helps!
Perfect explanation thanks
very helpful (thank god subtitles exist)
❤❤
how did you get e3 ?
Hi adithkiranmumar, thanks for your question. We are looking for a third vector orthogonal to u1 and u2. In order to find one we can start with any vector in R3 that is not in Span{u1,u2}. So I just picked an easy vector (e3). Hope this helps.
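The procedure described in this reply (start from any vector outside Span{u1, u2} and subtract its components along u1 and u2) can be sketched numerically; the vectors u1 and u2 below are illustrative, not the ones from the video:

```python
import numpy as np

# A hypothetical orthonormal pair u1, u2 in R^3
u1 = np.array([1.0, 0.0, 1.0]) / np.sqrt(2)
u2 = np.array([0.0, 1.0, 0.0])
# Start from e3, which is not in Span{u1, u2},
# and remove its components along u1 and u2 (one Gram-Schmidt step)
e3 = np.array([0.0, 0.0, 1.0])
w = e3 - (e3 @ u1)*u1 - (e3 @ u2)*u2
u3 = w / np.linalg.norm(w)   # unit vector orthogonal to both u1 and u2
```

Any starting vector outside the span works; e3 is simply a convenient choice, as the reply notes.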
You had all the time to go through the calculation, but yet you are just assuming that whoever watches this is a professor and wouldn't mind high-level small talk with his pal
Thank you, our dinosaur professor does not explain anything properly and thinks that people pass his class because he explained and they were the minority who listened lol
thanks for posting just wish the audio was better
I surprised by zero like of a mathematician
Why did we find u3 when finding U in B? A bit confused. but didnt find a u3 when we were solving for A?
Hi, thank you for your question. I am not sure I understand exactly what you mean, but let me try to help. In this video we are actually solving two problems. First the SVD for A, which requires us to find U. And then we determine the SVD of another matrix B, which requires us to find another matrix U. Was this the problem?
@@MatheMagician I did not understand why we needed to find u3 for The Matrix B. I tried to test my answer with U(2x2) and it was perfectly true.
@@alibkhash Thank you for your question. Then you might be doing some other type of decomposition. In the SVD we explicitly impose the sizes of the matrices: we want U and V to be orthogonal matrices (hence square matrices), which means that we need U 3 by 3, V 2 by 2 and Sigma 3 by 2 in order to get B=USigmaV^T. You can generalize this idea to non-square U and V, but that is not what we did in this example. Hope this helps!
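The size requirements described in this reply can be checked with NumPy's full SVD; the 3 by 2 matrix below is an illustrative example, not the matrix B from the video:

```python
import numpy as np

B = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])    # a 3x2 example matrix
# full_matrices=True enforces square (orthogonal) U and V
U, sigma, Vt = np.linalg.svd(B, full_matrices=True)
# U is 3x3 and V is 2x2, so Sigma must be padded to 3x2 to reconstruct B
S = np.zeros((3, 2))
S[:2, :2] = np.diag(sigma)
```

With full_matrices=False one gets the reduced decomposition with a 3 by 2 U instead, which is presumably the variant the commenter computed; both reproduce B, but only the full version has orthogonal square U and V.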
2.17 bottom row: 1/(c-s)² = 1/(s-c)² ? how is this possible? s-c = c-s --> c = s? Or is this a result of c being an arbitrary constant?
Hi Leon, thank you for your question. No, it is easier than that: it is because (-1)^2 = 1: 1/(c-s)^2 = 1/(-(s-c))^2 = 1/((-1)^2 (s-c)^2) = 1/(s-c)^2. Hope this helps.
@@MatheMagician Ahh, of course. Thank you
For this example, could you have changed the principle branch to go along the non-positive real axis? Then you could avoid the singularity.
Hi OleJoe, thank you for your question. The branch cut has to connect 0 and infinity, the two branch points, so we do have a choice here. We want the positive real axis in our contour, because eventually we have to have the integral from 0 to infinity. We then also need to close the contour in some way, so you could close along the positive y-axis, but then you get some other integral along L2 (positive y axis instead of negative real axis) that we might not know. So maybe this works, but it will not make it easier as far as I can see. Hope this helps, good luck!
very helpful
Not be rude or anything, but it's quite hard to hear you
not true
The actual lecture quality was good, but the audio itself could use improvement
Thanks a lot!
Thank you so much
You explain so good ❤
This is nice
Thanks a lot!
till today , no one have made such good video like you SIR. really hatsoff , u covered all corner cases in SVD
🙏
Thank-you so much❤ I was not able to understand concept of generalized eigenvectors before this video
But this contradicts the Theorem which you taught us in previous lecture which says that " The total number of vector in a cycle is equal to its Algebraic Multiplicity". Here in one cycle we have 2 vectors while in other we have only 1 vector. Please clarify this sir.
Hi AnmolKumar, thank you for your question. It should be: the total number of vectors in all cycles equals the algebraic multiplicity. So in this case we have 2 + 1 = 3 vectors in total, which indeed equals the algebraic multiplicity. I will check whether this is correct/clear in the other videos. Thank you for pointing this out and I hope this answers your question. Good luck!
Thank you so much, sir.
Nice video Sir 😊
This is a gem. Thank you.