Computer science people, I am looking to be pointed in the right direction. I am looking for a solution to a problem, and I think the answer might lie in an area of Computer Science/Statistics/O.R. that I know very little about. Here's the deal:
I have two mathematical functions, f_1 and f_2. Both functions intersect the x-axis at exactly two points.
f_1 is easy. It's just a quadratic function. When I say it intersects the x-axis in two points, I mean that it looks like the parabola in this image:
http://www.freemathhelp.com/images/lessons/graph14.gif
and I really don't care what happens to it below y=0. In fact, I'll assume y=0 wherever the quadratic goes negative. So I am just trying to say that it encloses an area above the x axis. Maybe that doesn't matter...
f_2 is more interesting. It's actually defined in terms of a bunch of samples. For various x values I have sampled y values. For each of these samples, x>=0 and y>=0. Even though the function data comes from samples, I have it set up so that you can ask for any x. If an x falls between two sample values, I do a linear estimate using the slope between the two adjacent samples. Outside my sample range in x, y is always 0. Hopefully you're with me so far.
Last but not least, there are two parameters of f_2 that I can tweak, even though f_2 is defined in terms of samples. One parameter is an x offset. I can shift the entire function left or right arbitrarily. The second parameter is a little harder to describe, but it's basically a stretch operation in the x axis. I can grow or shrink the distance between each of the samples uniformly.
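For what it's worth, numpy's interp does exactly this kind of piecewise-linear lookup, and the shift and stretch can just be applied to the sample x-positions. A minimal sketch with made-up samples (the real measurements would replace xs/ys):

```python
import numpy as np

# Made-up samples; the real (x, y) measurements would go here.
xs = np.array([1.0, 2.0, 3.0, 4.0])
ys = np.array([0.0, 2.0, 1.5, 0.0])

def f2(x, alpha=0.0, beta=1.0):
    """f_2 with the two tweakable parameters: alpha shifts the samples
    along x, beta stretches the spacing between them uniformly.
    Linear interpolation between samples, 0 outside the sample range."""
    return np.interp(x, beta * xs + alpha, ys, left=0.0, right=0.0)
```

Shifting right by alpha=1 moves the peak from x=2 to x=3; beta=2 moves it to x=4.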
Again, the problem is to select the shift and x-stretch for f_2 so that the area under f_2 is maximized without escaping the area of f_1. My memory of calculus tells me I want to minimize (integral(f_1) - integral(f_2)).
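If the calculus route gets hairy, a brute-force numerical version of "maximize the area under f_2 while staying under f_1" is also an option. A rough sketch, where the parabola, the samples, and the search ranges are all made up for illustration:

```python
import numpy as np

# Toy stand-ins (not the actual data):
def f1(x):
    # downward parabola with roots at x=1 and x=5, clipped to 0 below the x axis
    return np.maximum(-(x - 3.0) ** 2 + 4.0, 0.0)

xs = np.array([0.0, 1.0, 2.0])  # made-up samples for f_2
ys = np.array([0.0, 1.0, 0.0])

def f2(x, alpha, beta):
    return np.interp(x, beta * xs + alpha, ys, left=0.0, right=0.0)

def score(alpha, beta, grid):
    """Area under f_2, or -inf if f_2 pokes above f_1 anywhere on the grid."""
    y2 = f2(grid, alpha, beta)
    if np.any(y2 > f1(grid) + 1e-9):
        return -np.inf
    return float(np.sum(0.5 * (y2[1:] + y2[:-1]) * np.diff(grid)))  # trapezoid rule

grid = np.linspace(0.0, 6.0, 601)
best = max(((a, b) for a in np.linspace(0.0, 5.0, 51)
                   for b in np.linspace(0.5, 4.0, 36)),
           key=lambda p: score(p[0], p[1], grid))
```

The constraint is only checked at grid points, so a finer grid (or a real optimizer with a penalty term) would be the next refinement.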
Hopefully this makes a little more sense. If not, I can draw a picture! Thanks!
Reply
So f1 = ax^2 + bx + c. Easy enough.
f2 = (m12/\beta) * (x - (\beta*x1 + \alpha)) + y1 when \beta*x1 + \alpha < x <= \beta*x2 + \alpha,
(m23/\beta) * (x - (\beta*x2 + \alpha)) + y2 when \beta*x2 + \alpha < x <= \beta*x3 + \alpha,
etc.
where (x1, y1) is the first sample, etc., and m12 is the slope between samples 1 and 2, (y2 - y1)/(x2 - x1). \alpha is your horizontal shift factor, and \beta is your horizontal scaling factor: each sample x_i lands at \beta*x_i + \alpha, and each segment's slope gets divided by \beta.
I think the above description is right, although it might have bugs. Probably worth graphing a few versions and seeing if it's right.
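One way to sanity-check a transform like this numerically (a sketch in Python with made-up sample points): each sample x_i should land at \beta*x_i + \alpha, and each segment's slope should shrink by a factor of \beta. Comparing a direct piecewise evaluation against numpy's interpolation on the transformed sample positions:

```python
import numpy as np

# Made-up sample points standing in for the real data.
sx = np.array([1.0, 2.0, 4.0])
sy = np.array([0.0, 3.0, 1.0])
alpha, beta = 0.5, 2.0  # shift and stretch to test

def f2_piecewise(x):
    """Evaluate the shifted/stretched piecewise-linear function directly:
    sample i sits at beta*sx[i] + alpha, and each segment keeps its
    original slope divided by beta. Zero outside the sample range."""
    tx = beta * sx + alpha
    if x < tx[0] or x > tx[-1]:
        return 0.0
    i = min(np.searchsorted(tx, x, side='right') - 1, len(tx) - 2)
    m = (sy[i + 1] - sy[i]) / (sx[i + 1] - sx[i])  # original slope
    return sy[i] + (m / beta) * (x - tx[i])

# Cross-check against np.interp on the transformed sample positions.
for x in np.linspace(2.0, 9.0, 29):
    ref = np.interp(x, beta * sx + alpha, sy, left=0.0, right=0.0)
    assert abs(f2_piecewise(x) - ref) < 1e-12
```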
What you actually want is integral((f_1 - f_2) dx). The distinction is pretty important - you want the xs to line up right, so subtract pointwise before integrating. Then you'll have to differentiate with respect to alpha and beta, find the critical points (where both partial derivatives are zero), then check the determinant of the Hessian for concavity. That should give you all the maxima.
I think that's easy enough to do in Mathematica (probably like an hour or so of tweaking), and it's all regular calculus stuff. But if you're going to use Mathematica anyway, I think it has a max function :).
It's been a while since I've done any of this, so I'd sanity-check what I said (and your results). I haven't used Mathematica in a while either.
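Just to illustrate the critical-point/Hessian recipe on a toy objective (a made-up smooth function of alpha and beta, not the actual area integral, whose limits move with the parameters and make things messier):

```python
import sympy as sp

a, b = sp.symbols('alpha beta', real=True)
# Made-up smooth objective standing in for integral((f_1 - f_2) dx).
J = (a - 1) ** 2 + 2 * (b - 3) ** 2 + a * b

# Critical points: both partial derivatives zero.
crit = sp.solve([sp.diff(J, a), sp.diff(J, b)], [a, b], dict=True)

# Second-derivative test: det(H) > 0 with J_aa > 0 means a local minimum;
# det(H) > 0 with J_aa < 0 would be a local maximum.
H = sp.hessian(J, (a, b))
detH = H.det()
```

For this toy J there is a single critical point at (alpha, beta) = (-4/7, 22/7), and det(H) = 7 > 0 with J_aa = 2 > 0, so it's a minimum.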