Category: Software Design
efficient way to find only the closest objects

I've got an array of objects. Each object has two properties: x and y, which are its coordinates on a plane.

If I want to find all of the objects in the array that are less than 20 units away from a given object, what is the most efficient way to do it?

All that I know to do is loop through the array, and for each object use the distance formula to find its distance from the given object.

Is there a better way? Is this the only way?

No. You need to check each node at least once, which is what you do by iterating through the array. The algorithm is O(N), which is pretty good...

thanks for the reply... I'm kind of disappointed, though. I was hoping that there was some sort of short cut, though I can't imagine what it would be.

That'll have to do.

thanks.

Well, unless you have a psychic computer, that's gonna be hard.

It's probably possible to improve the data structure. For example, you can store the objects in "buckets": split the coordinate plane into square areas, and think of each square as a bucket. If you choose the bucket size so that you only have to check the buckets surrounding the one containing the object you're testing from, you might save some time. It depends on your data.

Of course, building the improved data structure will cost you time, so you might end up with no net gain, even if the algorithm itself is greatly sped up...
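For illustration, here is a rough C++ sketch of the bucket idea (the Point struct, the cell size, and the function names are illustrative, not from this thread):


#include <cmath>
#include <map>
#include <utility>
#include <vector>

struct Point { double x, y; };

const double CELL = 20.0;   // bucket size chosen to equal the search radius

// Map from (cell x, cell y) to the points that fall in that bucket.
typedef std::map<std::pair<int, int>, std::vector<Point> > Grid;

std::pair<int, int> cellOf(const Point& p)
{
    return std::make_pair((int)std::floor(p.x / CELL),
                          (int)std::floor(p.y / CELL));
}

// Build the grid once up front...
Grid buildGrid(const std::vector<Point>& pts)
{
    Grid g;
    for (std::size_t i = 0; i < pts.size(); i++)
        g[cellOf(pts[i])].push_back(pts[i]);
    return g;
}

// ...then each query only has to look at the 3x3 block of buckets
// around the query point instead of at the whole array.
std::vector<Point> pointsNear(const Grid& g, const Point& q, double r)
{
    std::vector<Point> out;
    std::pair<int, int> c = cellOf(q);
    for (int i = -1; i <= 1; i++) {
        for (int j = -1; j <= 1; j++) {
            Grid::const_iterator it =
                g.find(std::make_pair(c.first + i, c.second + j));
            if (it == g.end()) continue;
            const std::vector<Point>& bucket = it->second;
            for (std::size_t k = 0; k < bucket.size(); k++) {
                double dx = bucket[k].x - q.x;
                double dy = bucket[k].y - q.y;
                if (dx * dx + dy * dy <= r * r)
                    out.push_back(bucket[k]);
            }
        }
    }
    return out;
}


Rebuilding the grid whenever the objects move is the extra cost mentioned above, so whether it wins depends on how many objects you have and how often they move.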

I think that I'll just stick with looping through the array... although your bucket idea is interesting, and something that never would have occurred to me. I may play around with that, too.

thanks for your help.


-will

One question I might ask, though: is the nature of the data random, or is there some order to it? In other words, what is the mechanism by which these points are assigned in the first place? Knowing that, there might be some kind of a way of quickly "disqualifying" some sets of points. I don't have a specific idea in mind; just pushing this around in my head. I love geometrical problems.

Whenever I think about computer problems, I try to think a little more about how I would solve it without a computer. For example, how would you or I optimize that search if we were doing it manually?

First we would glance quickly at the points on the plane to determine which ones seem too far away to bother with. We would also not waste time on the ones that are obviously too close to worry about. We would then estimate the general radius of our search, and start checking those which seem to be around that border.

But of course, this is because we have two ways of checking distance: one is an inaccurate estimate with our eyes, while the second is an accurate reading with a ruler or string. Can a computer do something similar? Let's think about it some. Maybe there is a method of doing a quick low-accuracy search to eliminate a great number of candidates on the first pass, and then we pick up the hard numbers on the second pass. Or it could even be a multipass optimization, if there is an extremely large array of numbers.

In a way, this is what andnaess's buckets provide, but maybe more is possible. What is the actual distance formula, and what do you mean by "twenty units"?

Let's say for our purposes the objects are travelling on randomly-chosen paths... sine waves, or something similar.

I want the objects to be aware of the other objects that are within a certain radius of them... that's what the 20 units is. It's actually 20 pixels on the screen, I guess.

I haven't tried to do anything with andnaess's bucket idea yet, but it has really intrigued me.

The distance formula I was going to use is just the one that I have always used in math class:


d = sqrt((x2 - x1)^2 + (y2 - y1)^2)


is there another one?

My hunch when reading your first post was that this was a collision-detection-like problem, and now I'm quite positive that it is, so maybe you could get some ideas from this webpage on collision detection:
http://www.permutationcity.co.uk/programming/collision.html
(You'll find two bucket solutions there)

And I'm sure there are plenty more available, so do a search on Google for collision detection algorithms.

The bucket solution is the closest you can get to the computer doing a quick "low-accuracy searching".

There is an easy solution which uses a kind of bucketing, in its simplest form.

You are using the square-root formula to get all the points within the radius. Still use it, but with a little modification.

Suppose you have (18, 47) and (97, 45). You don't need the sqrt formula at all, because you can first test that

abs(97 - 18) > 20

The distance is already greater than 20 in a single coordinate, so the real distance between the points must be greater than 20.

So first check the distance between the points in each coordinate, and only if both distances are within your threshold, use the sqrt.

With this, your algorithm is still O(n), because you have to check all the points, but in fact it will be a liiiitle faster, because you won't be calling the costly sqrt function as often.
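A minimal sketch of that test in C++ (the function name and signature are illustrative):


#include <cmath>

// Cheap per-coordinate rejection first; only pay for the sqrt when
// both axis distances are already within the threshold.
bool withinRadius(double x1, double y1, double x2, double y2, double r)
{
    double dx = std::fabs(x2 - x1);
    double dy = std::fabs(y2 - y1);
    if (dx > r || dy > r)
        return false;                          // rejected on one axis alone
    return std::sqrt(dx * dx + dy * dy) <= r;  // exact check
}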


If you want a little more improvement, sort the array, using as the key the coordinate with more dispersion in it, and then do a binary search to find where the point you are analyzing is (or would fit).

Then, look at and test all the surrounding points in the array (in both directions, right and left). Whenever you find a point which is outside of the range on the key axis, stop there. When you have found both "outside" points, you're finished.

Thanks, guys, for your input. I have decided to just loop through each item, the boring, slow way.

I think that figuring out how to tie a 'bucket' sort of system in to what I'm doing would be too much for me, at this point.

But, thanks to your suggestions, I won't actually be taking the square root, which will save me some time, I expect. It probably would not have occurred to me to do it this way without your help.

So, thanks.


-Will

I admit, I may not know what I'm talking about since I'm only learning about this right now, but how about applying a hash function to this? Will that work in this instance? Hash lookups are O(1)....

You are right, a hash table lookup is O(1), but as you have to go through every node, you would do N queries, so N * O(1) = O(N).

You could use a method similar to buckets, but a little faster to code.

The distance formula packs a punch on your CPU compared to basic math (+, -) because of all its squares and square roots.

The most effective method (in both code time and runtime) I have personally used is a multi-pass approach that eliminates obvious points.

Pass 1:
Search via the X-axis and eliminate points more than 20 units from the test point


for (i = 0; i < points.length; i++) {
    if (Math.abs(points[test].x - points[i].x) <= 20) {
        points_pass2.push(points[i]);
    }
}


Pass 2:
Search the points accepted by the last pass via the Y-axis and eliminate points more than 20 units from the test point


for (i = 0; i < points_pass2.length; i++) {
    if (Math.abs(points[test].y - points_pass2[i].y) <= 20) {
        points_pass3.push(points_pass2[i]);
    }
}


Pass 3:
Apply the distance formula to the final remaining points to get an accurate read


for (i = 0; i < points_pass3.length; i++) {
    dx = points[test].x - points_pass3[i].x;
    dy = points[test].y - points_pass3[i].y;
    if (Math.sqrt(dx * dx + dy * dy) <= 20) {
        points_within20.push(points_pass3[i]);
    }
}


You then have your final array, points_within20, which contains all of the points at or closer than 20 units to your central point.

Since you first eliminated the points more than 20 units away on the x and y axes using a very cheap, quick-to-execute comparison, your loop with the distance formula processed far fewer points, and therefore used much less CPU time.

I hope this helped

[eko]

Eko-

Is it usually faster to do the quick distance comparison in two loops, one for x and one for y, than in one loop that compares both x and y at once?

I haven't done any testing on my own, yet. It just seemed to me like it may be as fast to do both comparisons in one loop.

I like your idea, though, and I've used it. Thanks.


-will

Well, Ekostudios has shown an implementation of the first part of the method that I explained some posts ago.

I think that explaining the method in text is not as clear as source code would be, so sorry, guys.

Does someone want to "pseudoimplement" the second part of my method?
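A rough C++ sketch of that second part, assuming x is the key axis and the array is already sorted on it (the names, and the use of std::lower_bound/std::upper_bound for the binary search, are illustrative choices):


#include <algorithm>
#include <vector>

struct Point { double x, y; };

bool byX(const Point& a, const Point& b) { return a.x < b.x; }

// pts must already be sorted by x (the key axis). Binary-search for
// the window [q.x - r, q.x + r], then run the exact distance test
// only on the points inside that window.
std::vector<Point> neighbours(const std::vector<Point>& pts,
                              const Point& q, double r)
{
    Point lo = q, hi = q;
    lo.x -= r;
    hi.x += r;
    std::vector<Point>::const_iterator first =
        std::lower_bound(pts.begin(), pts.end(), lo, byX);
    std::vector<Point>::const_iterator last =
        std::upper_bound(pts.begin(), pts.end(), hi, byX);
    std::vector<Point> out;
    for (; first != last; ++first) {
        double dx = first->x - q.x, dy = first->y - q.y;
        if (dx * dx + dy * dy <= r * r)   // squared comparison, no sqrt
            out.push_back(*first);
    }
    return out;
}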

Hi,

I noticed that a lot of people have been advocating multipass solutions to this problem, i.e.:

1) Quick Reject in X
2) Quick Reject in Y
3) Distance calculation

I would, however, be rather surprised if it wasn't faster to do a single pass test based on the magnitude of the distance squared:



if(dx*dx + dy*dy > rad2)   /* rad2 = radius * radius, so no sqrt needed */
{
    //...
}


I'll do some timing results and see.

Basically, a single magnitude squared test turns out to be about 20-25% faster than the two-layer test, on a P4 under VC++ 6.0.

Time for 10^8 distance rejection/acceptance calculations was:

1) Magnitude squared: 917084 / (1.19 * 10^6) s

2) Two-layer: 1286742 / (1.19 * 10^6) s

The code follows. arrayx[] and arrayy[] are initialised to random values in the range 0...1000, as are px and py.

Magnitude squared:


for(ctr = 0, count = 0; ctr < 1000; ctr++)
{
    dx = px - arrayx[ctr];
    dy = py - arrayy[ctr];
    if(dx*dx + dy*dy < 100)   /* 100 = 10^2, the squared radius */
    {
        count++;
    }
}


Two-layer + magnitude squared:


for(ctr = 0, count = 0; ctr < 1000; ctr++)
{
    dx = px - arrayx[ctr];
    dy = py - arrayy[ctr];
    if(dx > -10 && dx < 10 && dy > -10 && dy < 10)   /* cheap box test first */
    {
        if(dx*dx + dy*dy < 100) count++;
    }
}


Neither of these is safe if dx*dx + dy*dy > 0x7FFFFFFF. Options for making this safe include using 64-bit integers for the mag^2 calculations, or using double-precision numbers.
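For instance, a sketch of the 64-bit variant (not the code used for these timings):


// Widen to 64 bits before squaring so dx*dx + dy*dy cannot overflow
// a 32-bit int.
bool withinSquared(int dx, int dy, int r)
{
    long long dx2 = (long long)dx * dx;
    long long dy2 = (long long)dy * dy;
    return dx2 + dy2 <= (long long)r * r;
}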

The timing results for the double-precision case are even more skewed in favor of magnitude squared only:

10^8 double precision distance tests:

1) Mag-squared = 1.5582 / 1.19 s

2) Two-layer = 4.2211 / 1.19 s

I figure that the 3x discrepancy in the timing results for the double precision case is probably because either dx or dy is being swapped into / out of memory more times.

Or it could just be that the more complex condition breaks the branch prediction.

Whoops.

If I replace that UGLY if statement with



//integer version
if(abs(dx) < 10 && abs(dy) < 10) ...




//double precision
if(fabs(dx) < 10 && fabs(dy) < 10) ...


Then the axis-aligned bounding box is faster. Duh. After that change and some inner-loop optimisation, we get:

Integer:

1) Mag^2 test: 9.4 / 1.19 s

2) Two-level: 6.7 / 1.19 s

Double Prec

1) Mag^2 test: 1.52 / 1.19 s

2) Two-level: 9.2 / 1.19 s

so... what you're saying is... the two-layer test is faster if you're dealing with integers, and the single magnitude-squared test is faster if you're dealing with doubles (like me)?









