The Code Boss

Projects and Quirks

The Constraint in Variation of Parameters

May 24, 2014, 12:56 a.m. | By Michael Oliver

If you've ever taken a course in Differential Equations (DEs), you've most likely seen Variation of Parameters. If not, it's a useful method for finding a particular solution to a second-order linear DE when you already have two linearly independent solutions of the associated homogeneous equation (it also extends to higher-order cases, but for simplicity I won't generalize and will stick to the second-order case).

The point of this article is to deal with a concerning constraint used in the method. I will point it out, and by getting rid of the constraint we'll find a very interesting and fundamental property of DEs. To follow along, you won't need to know much about differential equations; however, you will need some Calculus and Linear Algebra background. I will explain things clearly as I go along.

Variation of Parameters

To motivate the discussion, let's start by looking at the method.

Suppose we are given a second-order differential equation in standard form:

$$y''(x) + P(x)y'(x) + Q(x)y(x) = R(x)$$

We are also given two linearly independent solutions of the homogeneous DE (that is, \(R(x) = 0\)), \(y_1(x)\) and \(y_2(x)\). What we want to find is \(y_p(x)\), a particular solution to our DE.

We can postulate that such a solution can be written in the form:

$$y_p(x) = y_1(x)v_1(x) + y_2(x)v_2(x)$$

Notice that this step does not impose a restriction on \(y_p(x)\): wherever \(y_1 \neq 0\), for example, choosing \(v_1 = y_p/y_1\) and \(v_2 = 0\) recovers any \(y_p\) whatsoever. Next, we differentiate this (and I will now start removing the \((x)\)'s, and refer to \(y_p\) simply as \(y\) for conciseness):

$$y' = y_1v_1' + y_1'v_1 + y_2v_2' + y_2'v_2$$

This leads us to the important constraint that this article is about. In the Variation of Parameters method, we now assume that the following constraint holds:

$$y_1v_1' + y_2v_2' = 0$$

In my opinion, this constraint is a strong one. We are limiting ourselves to functions \(v_1, v_2\) that satisfy it. What if there are particular solutions we can only reach without it? Why are we allowed to impose it at all? Anyone who uses this constraint should be asking these questions, and yet usually we just ignore it because the method works.

I will continue deriving the method with the constraint in place; however, we will return here later and see what happens if we do not constrain the solution.

With the constraint, we now get:

$$y' = y_1'v_1 + y_2'v_2$$

$$y'' = y_1''v_1 + y_1'v_1' + y_2''v_2 + y_2'v_2'$$

We now substitute both derivatives and \(y\) back into the DE:

$$\Big[y_1''v_1 + y_1'v_1' + y_2''v_2 + y_2'v_2'\Big] + P\Big[y_1'v_1 + y_2'v_2\Big] + Q\Big[y_1v_1 + y_2v_2\Big] = R$$

Notice that by rearranging the terms, we get:

$$(y_1'' + Py_1' + Qy_1)v_1 + (y_2'' + Py_2' + Qy_2)v_2 + y_1'v_1' + y_2'v_2' = R$$

But we know that \(y_1, y_2\) are solutions to the homogeneous equation! Hence the first two terms disappear and we are left with,

$$y_1'v_1' + y_2'v_2' = R$$

and if we consider our constraint from earlier:

$$y_1v_1' + y_2v_2' = 0$$

Notice that we now have a linear system in \(v_1'\) and \(v_2'\) (and hence a pair of first-order DEs for \(v_1\) and \(v_2\)). We can write it as:

$$\left[ \begin{array}{cc}y_1 & y_2 \\ y_1' & y_2' \end{array} \right] \left[ \begin{array}{c}v_1' \\ v_2' \end{array} \right] = \left[ \begin{array}{c}0 \\ R \end{array} \right]$$

We assumed that \(y_1\) and \(y_2\) are linearly independent solutions of the homogeneous equation, and for such solutions the determinant of this matrix is guaranteed to be nonzero (in fact it is either identically zero or never zero, as we'll see shortly from Abel's Identity). We call this determinant the Wronskian:

$$W[y_1, y_2](x) = y_1y_2' - y_1'y_2$$

It follows that the matrix is invertible, and so multiplying by the inverse we get:

$$\left[ \begin{array}{c}v_1' \\ v_2' \end{array} \right] = \frac{1}{W} \left[ \begin{array}{cc}y_2' & -y_2 \\ -y_1' & y_1 \end{array} \right] \left[ \begin{array}{c}0 \\ R \end{array} \right]$$

$$\left[ \begin{array}{c}v_1' \\ v_2' \end{array} \right] = \frac{1}{W} \left[ \begin{array}{c}-y_2 R \\ y_1 R \end{array} \right]$$

But notice that we can now just integrate the system, yielding:

$$v_1 = \int \frac{-y_2 R}{W} dx$$

$$v_2 = \int \frac{y_1 R}{W} dx$$

Now we can substitute all of this back into our equation for \(y_p(x)\), yielding the particular solution:

$$y_p(x) = y_1(x) \int \frac{-y_2(x) R(x)}{W[y_1, y_2](x)} dx + y_2(x) \int \frac{y_1(x) R(x)}{W[y_1, y_2](x)} dx$$
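
As a quick illustration (a standard textbook example, not something from the derivation above), take \(y'' + y = \sec x\), with homogeneous solutions \(y_1 = \cos x\) and \(y_2 = \sin x\). Then \(W = \cos^2 x + \sin^2 x = 1\), and the formula gives:

$$v_1 = -\int \sin x \sec x \, dx = \ln|\cos x| \hspace{2.0em} v_2 = \int \cos x \sec x \, dx = x$$

$$y_p = \cos x \ln|\cos x| + x\sin x$$

which you can verify satisfies the original equation.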

This formula is a very important result; it means that we can find a particular solution for any second-order linear DE, assuming we know two independent homogeneous solutions and are able to solve the integrals above. And even if the integrals have no closed form, numerically there really is no limitation. ...Or is there?

Modifying The Constraint

Let's rewind the derivation above and remove the constraint we specified. Instead of setting:

$$y_1v_1' + y_2v_2' = 0$$

We will now set it equal to an arbitrary function \(f(x)\) (that is, we really aren't imposing any constraint, since it can be any function):

$$y_1v_1' + y_2v_2' = f(x)$$

This means we need to re-calculate our derivatives from earlier:

$$y' = y_1'v_1 + y_2'v_2 + f(x)$$

$$y'' = y_1''v_1 + y_1'v_1' + y_2''v_2 + y_2'v_2' + f'(x)$$

Again we substitute everything back into the DE to get:

$$\Big[y_1''v_1 + y_1'v_1' + y_2''v_2 + y_2'v_2' + f'(x)\Big] + P\Big[y_1'v_1 + y_2'v_2 + f(x)\Big] + Q\Big[y_1v_1 + y_2v_2\Big] = R$$

Again, we can re-arrange the terms to get:

$$(y_1'' + Py_1' + Qy_1)v_1 + (y_2'' + Py_2' + Qy_2)v_2 + y_1'v_1' + y_2'v_2' + f'(x) + Pf(x) = R$$

Cancelling out the homogeneous solutions, we are left with:

$$y_1'v_1' + y_2'v_2' + f'(x) + Pf(x) = R$$

$$y_1'v_1' + y_2'v_2' = R - f'(x) - Pf(x)$$

Using this line and our constraint from earlier, we get the system:

$$\left[ \begin{array}{cc}y_1 & y_2 \\ y_1' & y_2' \end{array} \right] \left[ \begin{array}{c}v_1' \\ v_2' \end{array} \right] = \left[ \begin{array}{c}f(x) \\ R - f'(x) - Pf(x) \end{array} \right]$$

We now multiply again by the inverse matrix, yielding:

$$\left[ \begin{array}{c}v_1' \\ v_2' \end{array} \right] = \frac{1}{W} \left[ \begin{array}{cc}y_2' & -y_2 \\ -y_1' & y_1 \end{array} \right] \left[ \begin{array}{c}f(x) \\ R - f'(x) - Pf(x) \end{array} \right]$$

$$\left[ \begin{array}{c}v_1' \\ v_2' \end{array} \right] = \frac{1}{W} \left[ \begin{array}{c}y_2'f - y_2R + y_2Pf + y_2f'\\ -y_1'f + y_1R - y_1Pf - y_1f' \end{array} \right]$$

Integrating the linear system, we get:

$$v_1 = \int \frac{y_2'f - y_2R + y_2Pf + y_2f'}{W} dx$$

$$v_2 = \int \frac{-y_1'f + y_1R - y_1Pf - y_1f'}{W} dx$$

We've done it! We now have a new form of the Variation of Parameters method: we can choose any function \(f\) we like, picking one that makes the above integrals convenient to compute. The result will then yield a particular solution to our DE.

But what if we can do more? (Feel free to jump to the end of the article if the math coming up looks a bit too hairy.)

A Step Beyond the Constraint

Let's consider \(v_1\) and rewrite it in the following way:

$$v_1 = \int \frac{-y_2R}{W} dx + \int \frac{y_2'f + y_2f'}{W} dx + \int \frac{y_2Pf}{W} dx$$

Looking at the middle term, we can see that its numerator is the result of applying the Product Rule to \(y_2 f\). Thus:

$$\int \frac{y_2'f + y_2f'}{W} dx = \int \left[ \frac{1}{W} \frac{d}{dx} \Big( y_2 f \Big) \right] dx$$

Now we apply Integration By Parts. I promise, I'll explain why we're doing all this pretty soon. Let \(u = \frac{1}{W}\) and \(dv = \frac{d}{dx}\big(y_2 f\big)\,dx\). Then:

$$v = y_2 f$$

$$du = \frac{d}{dx}\left( \frac{1}{W} \right) dx$$

To simplify this expression, we'll now use Abel's Identity:

$$W = W_0 e^{-\int P(x) dx} \hspace{2.0em} \text{(where $W_0$ is a constant of integration)}$$
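
In case you haven't seen Abel's Identity before, it follows from differentiating the Wronskian and using the fact that \(y_1, y_2\) solve the homogeneous equation:

$$W' = y_1y_2'' - y_1''y_2 = y_1(-Py_2' - Qy_2) - (-Py_1' - Qy_1)y_2 = -P(y_1y_2' - y_1'y_2) = -PW$$

This is a separable first-order DE whose solution is exactly the identity above.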

We get:

$$du = \frac{d}{dx}\left( \frac{1}{W_0 e^{-\int P(x) dx}} \right) dx$$

$$du = \frac{1}{W_0}\frac{d}{dx}\left( \int P(x) dx \right) e^{\int P(x) dx} dx$$

$$du = \frac{P(x)}{W_0} e^{\int P(x) dx} dx$$

$$du = \frac{P(x)}{W} dx$$

Now we can finally substitute into the Integration By Parts formula. This yields:

$$\int \frac{y_2'f + y_2f'}{W} dx = \frac{y_2 f}{W} - \int \frac{y_2 P f}{W} dx$$

Recall that this was actually the middle term in our expression for \(v_1\) above. We can now substitute it:

$$v_1 = \int \frac{-y_2R}{W} dx + \frac{y_2 f}{W} - \int \frac{y_2 P f}{W} dx + \int \frac{y_2Pf}{W} dx$$

$$v_1 = \int \frac{-y_2R}{W} dx + \frac{y_2 f}{W}$$

Now if we apply all the same steps to \(v_2\) we find the following formula:

$$v_2 = \int \frac{y_1R}{W} dx - \frac{y_1 f}{W}$$

But there's something peculiar about these two formulas. They are awfully symmetric. So let's substitute them back into the formula for \(y_p\):

$$y_p = y_1\left( \int \frac{-y_2R}{W} dx + \frac{y_2 f}{W} \right) + y_2\left( \int \frac{y_1R}{W} dx - \frac{y_1 f}{W} \right)$$

Cancelling out terms, we get:

$$y_p = y_1 \int \frac{-y_2R}{W} dx + y_2 \int \frac{y_1R}{W} dx$$

Something magical has now happened: \(f(x)\) has completely vanished! We introduced \(f(x)\) as an arbitrary constraint, but it has dropped out of the overall solution entirely. So really, whatever constraint you set in that step has no effect. More importantly, look at the result we just got. It is exactly the same as the result we had when we set the constraint to zero!

$$y_p(x) = y_1(x) \int \frac{-y_2(x) R(x)}{W[y_1, y_2](x)} dx + y_2(x) \int \frac{y_1(x) R(x)}{W[y_1, y_2](x)} dx$$

We have just proved something amazing, something fundamental to second-order linear differential equations. For any such equation, the formula above fully governs the relationship between two linearly independent homogeneous solutions and a particular solution. It turns out that setting the constraint to zero in Variation of Parameters is just a convenient simplification: it makes the algebra far easier, and we arrive at the same result.

I don't think this is intuitive. Comparing the systems of equations we developed in the two sections, it was not at all obvious that they would both eventually converge to the same solution. This is extremely fascinating to me, and now I feel that Variation of Parameters is a solid method with no awkward assumptions in it.
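
If you'd like to see the \(f\)-independence concretely, here is a minimal numerical sketch (entirely my own construction; the test equation \(y'' + y = \sec x\) and all the names in it are just convenient choices). It builds \(y_p\) from the generalized integrals for two different choices of \(f\) and prints matching values:

#include <cstdio>
#include <cmath>

// Test equation: y'' + y = sec(x), so P = 0, R = sec, y1 = cos, y2 = sin,
// and the Wronskian is W = cos^2 + sin^2 = 1. (All names here are my own.)
double R(double x) { return 1.0 / std::cos(x); }

// Integrand for v1' = (y2'f - y2R + y2Pf + y2f') / W, with P = 0 and W = 1.
double v1p(double x, double f, double fp) {
    return std::cos(x) * f - std::sin(x) * R(x) + std::sin(x) * fp;
}
// Integrand for v2' = (-y1'f + y1R - y1Pf - y1f') / W.
double v2p(double x, double f, double fp) {
    return std::sin(x) * f + std::cos(x) * R(x) - std::cos(x) * fp;
}

// Trapezoid rule on [0, x].
template<typename F>
double integrate(F g, double x, int n = 20000) {
    double h = x / n, sum = 0.5 * (g(0.0) + g(x));
    for (int i = 1; i < n; i++) sum += g(i * h);
    return sum * h;
}

int main() {
    for (double x = 0.2; x <= 1.2; x += 0.4) {
        // The textbook constraint: f = 0.
        double a = std::cos(x) * integrate([](double s) { return v1p(s, 0.0, 0.0); }, x)
                 + std::sin(x) * integrate([](double s) { return v2p(s, 0.0, 0.0); }, x);
        // An arbitrary alternative: f = sin(x), f' = cos(x).
        double b = std::cos(x) * integrate([](double s) { return v1p(s, std::sin(s), std::cos(s)); }, x)
                 + std::sin(x) * integrate([](double s) { return v2p(s, std::sin(s), std::cos(s)); }, x);
        std::printf("x = %.1f   yp(f = 0) = %.8f   yp(f = sin) = %.8f\n", x, a, b);
    }
}

Both columns print the same numbers (matching \(\cos x \ln|\cos x| + x \sin x\) from the earlier example); choosing \(f\) with \(f(0) = 0\) keeps the boundary terms of the definite integrals equal, so the two constructions agree exactly up to quadrature error.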

I hope you find this result as exciting as I do. I look forward to any discussions if you feel I've missed something, or should add something.

Tags: math Differential Equations



Long awaited Fantasi updates

March 20, 2014, 12:58 a.m. | By Michael Oliver

Hey everybody,

It's been quite a while since I posted a Fantasi update. I think I hinted previously that I was working on a big overhaul of the codebase, and this is now ready for release.

The big shift is that Fantasi is now GPU-based rather than CPU-based. I've also redirected development toward a realtime raytracer, rather than one that simply renders out to a file.

By doing this, I hope to accomplish a few things. Firstly, I'm using OpenGL's new compute shaders; this is a relatively new technology, and I've noticed that there are few examples of it available online. So I hope that Fantasi will prove to be a great example for others to learn from.
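
To give a flavor of what's involved (a minimal sketch of my own, not Fantasi's actual code; program, tex, width, and height are placeholders for an already-linked compute shader and its output texture), a ray-per-pixel dispatch boils down to just a few calls:

glUseProgram(program);
glBindImageTexture(0, tex, 0, GL_FALSE, 0, GL_WRITE_ONLY, GL_RGBA8);
glDispatchCompute((width + 15) / 16, (height + 15) / 16, 1); // one 16x16 work group per tile
glMemoryBarrier(GL_SHADER_IMAGE_ACCESS_BARRIER_BIT);         // finish image writes before displaying

The shader itself declares layout(local_size_x = 16, local_size_y = 16) and writes its color with imageStore.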

Secondly, I would love to abstract Fantasi into a generic rendering engine that could be integrated into video games. It's a long shot, but I've seen at least one other project where someone is trying to create a game engine based around a raytracer.

I'll begin posting more articles, particularly about some of the challenges I've faced and how to overcome them. In the meantime, check out the Fantasi Project Page for more information. The full source code is also available on the GitHub page.

Here's a nice video detailing some of the features of Fantasi, with some gorgeous visuals. I also discuss at a high level some of the challenges I face with future iterations of Fantasi. Be sure to watch it in HD.

And of course, this wouldn't be complete without a nice hi-res screenshot of Fantasi in action. (Click for the full resolution image)

That's all for today folks. Leave a comment if you've got ideas for me.

Tags: C++ raytracing 3D Raytracer Fantasi Visual Studio OpenGL



NES Emulator

Feb. 28, 2014, 2:36 a.m. | By Michael Oliver

I wrote an NES emulator a few months ago, and finally got around to hosting it on GitHub and posting a short demo video.

NESEmu (probably not a very original name) is an NES emulator written in C++ (with a few C++11 features scattered throughout). It was written for fun, so it is not designed to support every NES game out there. However, it does support a few important ones such as Super Mario and Legend of Zelda. It also supports save files for games that had them.

Compilation requires Visual Studio 2012 or greater, and the application itself is Windows-only (for now).

For more information and source code, visit the GitHub page here: NESEmu on GitHub

Tags: C++ Visual Studio NES emulator



VM Instructions

Jan. 27, 2014, 1:51 a.m. | By Michael Oliver

A short time ago I found myself implementing a simple virtual machine to handle MIPS assembly instructions. Designing the virtual CPU, I wanted something efficient as well as readable. Think of this as a guide to writing the CPU for a virtual machine with minimal yet efficient and readable code. I'll walk you through my process. I started by writing out a few simple instructions to handle: add, sub, and mult.

void CPU::Instruction(uint32 op) {
    int32 s, t, i;       // i will hold immediate operands once those instructions are added
    int64 temp;
    uint32 opcode = op & 0x3F;
    switch(opcode) {
    case 0x20: // add
        t = Reg[(op >> 16) & 0x1F]; // Reg[] = registers
        s = Reg[(op >> 21) & 0x1F];
        temp = s + t;
        Reg[(op >> 11) & 0x1F] = temp;
        break;
    case 0x22: // sub
        t = Reg[(op >> 16) & 0x1F];
        s = Reg[(op >> 21) & 0x1F];
        temp = s - t;
        Reg[(op >> 11) & 0x1F] = temp;
        break;
    case 0x18: // mult
        t = Reg[(op >> 16) & 0x1F];
        s = Reg[(op >> 21) & 0x1F];
        temp = (int64)s * t;        // widen first so hi/lo get the full 64-bit product
        hi = temp >> 32;            // hi and lo are CPU members
        lo = temp & 0xFFFFFFFF;
        break;
    }
}

So that was the first iteration. Every instruction repeats the same register-extraction arithmetic, and it just doesn't look pretty. It won't do. Also, MIPS has a lot of instructions, so writing them out one by one like this is a pain, and maintaining it is even worse. On top of this, the switch statement is going to be massive, which means a (very slight) loss in efficiency (you'll see why I say this soon).

We'll start with some #defines to clean things up. In MIPS, we have what are called the S, T and D register encodings, and we see these pop up very often. So the first step is to make these #defines.

#define S Reg[(op >> 21) & 0x1F]
#define T Reg[(op >> 16) & 0x1F]
#define D Reg[(op >> 11) & 0x1F]

With these, the code above becomes slightly more readable. We get the following:
template<uint32 opcode>
void CPU::Instruction(uint32 op) {
    int32 s, t, i;
    int64 temp;
    switch(opcode) {
    case 0x20: // add
        t = T;
        s = S;
        temp = s + t;
        D = temp;
        break;
    case 0x22: // sub
        t = T;
        s = S;
        temp = s - t;
        D = temp;
        break;
    case 0x18: // mult
        t = T;
        s = S;
        temp = (int64)s * t;
        hi = temp >> 32;
        lo = temp & 0xFFFFFFFF;
        break;
    }
}

Now comes the fun part. What we want is to have minimal overhead coming from the switch statement. So what we do is get rid of it entirely (or at least force the compiler to). If you look at the code above, I've actually done this step by adding a template parameter for the opcode. This means that for every opcode, we generate a different version of the function. This in turn allows the opcode to be known at compile time, which means the compiler can just optimize out the switch statement, causing every instruction to have its own, very short piece of code generated.
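
To make that concrete, here's roughly what the add specialization collapses to after the compiler eliminates the dead branches (illustrative only, written as if it were a hand-written function, not actual compiler output):

// Roughly what CPU::Instruction<0x20> compiles down to:
void CPU::Instruction_add(uint32 op) {
    int32 t = T;
    int32 s = S;
    int64 temp = s + t;
    D = temp;
}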

Now we just need to be sure that the compiler generates each instruction. We also need the lookup for each instruction to be fast.

So we create an array of member-function pointers, indexed by opcode. In other words, suppose our array is called Ins. Then since add's opcode is 0x20, we can just call (this->*Ins[opcode])(op) (note the this->* syntax, since these are pointers to member functions).
The code that follows does two things: first, it declares the array of function pointers; second, it initializes each element in the array with the correct function pointer inside the CPU constructor. Note that further improvements can be made here by making the functions static and using initializer lists (C++11), though I hope you agree that going much further would make this part of the code pretty unreadable.

// Inside the class definition for CPU:
void (CPU::*Ins[0xfff])(uint32);   // one entry per opcode, pointing at the matching specialization

CPU() {
    Ins[0x20] = &CPU::Instruction<0x20>; // add
    Ins[0x22] = &CPU::Instruction<0x22>; // sub
    Ins[0x18] = &CPU::Instruction<0x18>; // mult
}

We can also clean this part up a bit more with another define:
#define c(n) Ins[0x##n] = &CPU::Instruction<0x##n>;

CPU() {
    c(20) c(22) c(18)
}
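
As an aside, with the table in place the fetch-and-dispatch step itself is tiny. Here's a rough sketch (Step, pc, and Mem are my own placeholder names, not necessarily what you'd have in a real implementation):

void CPU::Step() {
    uint32 op = Mem[pc >> 2];     // fetch the next instruction word (word-indexed memory)
    pc += 4;
    uint32 opcode = op & 0x3F;    // decode the opcode field
    (this->*Ins[opcode])(op);     // jump straight to the specialized instruction
}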

Now there's just one last thing to get rid of: duplicate code. My method for this is twofold. Firstly, I recognize that the switch statement itself has a lot of repetition. So I use the following #define:
#define def(opr,code,val) case code : val break;  // opr is only there to label the instruction

Next, I look for all the similar pieces of code, and "group" them together using more #define's. The best way to explain this process is to just show you the resulting code after both steps.
template<uint32 opcode>
void CPU::Instruction(uint32 op) {
    int32 s, t, i;
    int64 temp;
#define A t = T;
#define B s = S;
#define C temp = s + t;
#define Ds temp = s - t;        // 'D' is taken by the destination-register macro, so sub gets a new name
#define E temp = (int64)s * t;  // widen first so hi/lo get the full 64-bit product
#define F D = temp;
#define G hi = temp >> 32;
#define H lo = temp & 0xFFFFFFFF;
    switch(opcode) {
        def(add,0x20,A B C F)
        def(sub,0x22,A B Ds F)
        def(mult,0x18,A B E G H)
    }
}

Look at that switch statement. We can clearly see each part of each instruction, and it is all super concise. I've made a list of #defines that is easy to reference and ensures we aren't duplicating code.

So in summary, we were able to make our code short, efficient, and readable all at the same time. With this framework in place, we can implement the rest of the instruction set and get code so concise it seems like cheating. I'll leave the rest as an exercise; however, as an example, here is my switch statement after implementing all the instructions I needed (note that I modified the #defines slightly to avoid further naming conflicts):

switch(opcode) {
    def(add,0x20,Ai Bi Ci Fi)
    def(sub,0x22,Ai Bi Di Fi)
    def(mult,0x18,Ai Bi Ei Gi Hi)
    def(multu,0x19,Ai Bi Ii Gi Hi)
    def(div,0x1A,Ai Bi Ji Ki)
    def(divu,0x1B,Ai Bi Mi Ni)
    def(mfhi,0x10,Oi)
    def(mflo,0x12,Pi)
    def(lis,0x14,Qi)
    def(slt,0x2A,Ai Bi Ri)
    def(sltu,0x2B,Ai Bi Si)
    def(jr,0x08,Bi Ti)
    def(jalr,0x09,Bi Ui Ti)
    def(lw,0x8C0,Bi Vi Wi)
    def(sw,0xAC0,Ai Bi Vi AAi Xi)
    def(beq,0x100,Ai Bi Vi Yi)
    def(bne,0x140,Ai Bi Vi Zi)
}

I hope this guide was useful and you learned some ideas from the use of template and macro magic. If you've noticed anything inaccurate, or anything that can be improved, please let me know in the comments.

Tags: C++ VM MIPS


