Category: Software Design
Divide algorithm

Hello,

I'm working on a big number library (for learning purposes). I'm trying to come up with the best divide function for an arbitrarily long number (a multiple of 32 bits - dwords).

What would be best?

Repeated subtraction? Or a combination of bit shifts and repeated subtraction?

Which has the best worst-case run time? The best average run time?

And what would be the best way to come up with the combination of bit shifts + subtraction?

-cARL

Repeated subtraction can be exponential in the number of bits used. Example:
bigNumber_N_bits / 1
needs 2^N subtractions, and each subtraction is again linear in N.
So the result will be O(N * 2^N) (less than perfect).
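
To make that concrete, here is what repeated-subtraction division looks like. This is only an illustrative sketch: it uses plain 64-bit scalars instead of a multiword big number, and the function name is made up. The loop runs quotient-many times, so dividing an N-bit number by 1 costs on the order of 2^N iterations (and on a real big number each subtraction is itself O(N) words):

#include <stdint.h>

/* Naive repeated-subtraction division (illustration only). */
uint64_t div_repeated_sub(uint64_t dividend, uint64_t divisor, uint64_t *remainder)
{
    uint64_t quotient = 0;
    while (dividend >= divisor) {   /* executes quotient-many times */
        dividend -= divisor;
        quotient++;
    }
    *remainder = dividend;
    return quotient;
}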

Shift-and-subtract (schoolbook long division) does O(N) work per output bit:
with up to N output bits you get
O(N^2) (much better, and the usual choice for a simple big number library).
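
Here is a minimal sketch of that loop on a big number stored as an array of 32-bit words, least significant word first. The representation and the helper names (big_cmp, big_sub, big_shl1, big_bit, big_divmod) are assumptions made up for this post, not anything from an actual library:

#include <stdint.h>
#include <string.h>

/* Compare two n-word numbers; returns -1, 0 or 1. */
static int big_cmp(const uint32_t *a, const uint32_t *b, int n)
{
    for (int i = n - 1; i >= 0; i--)
        if (a[i] != b[i])
            return a[i] < b[i] ? -1 : 1;
    return 0;
}

/* a -= b (n words each); the final borrow is intentionally discarded. */
static void big_sub(uint32_t *a, const uint32_t *b, int n)
{
    uint64_t borrow = 0;
    for (int i = 0; i < n; i++) {
        uint64_t d = (uint64_t)a[i] - b[i] - borrow;
        a[i] = (uint32_t)d;
        borrow = (d >> 63) & 1;          /* 1 if the subtraction wrapped */
    }
}

/* a = (a << 1) | low_bit, over n words. */
static void big_shl1(uint32_t *a, int n, uint32_t low_bit)
{
    uint32_t carry = low_bit;
    for (int i = 0; i < n; i++) {
        uint32_t out = a[i] >> 31;
        a[i] = (a[i] << 1) | carry;
        carry = out;
    }
}

/* Bit i (0 = least significant) of an n-word number. */
static uint32_t big_bit(const uint32_t *a, int i)
{
    return (a[i / 32] >> (i % 32)) & 1;
}

/* quotient = dividend / divisor, remainder = dividend % divisor.
 * All operands are n words; divisor must be non-zero.
 * One shift, one compare and at most one subtract per dividend bit,
 * each touching O(n) words: O(n^2) overall. */
void big_divmod(const uint32_t *dividend, const uint32_t *divisor,
                uint32_t *quotient, uint32_t *remainder, int n)
{
    memset(quotient, 0, (size_t)n * sizeof(uint32_t));
    memset(remainder, 0, (size_t)n * sizeof(uint32_t));

    for (int i = n * 32 - 1; i >= 0; i--) {
        /* The bit about to be shifted out of the top word: the remainder
         * briefly needs one extra bit when the divisor's top bit is set. */
        uint32_t overflow = remainder[n - 1] >> 31;

        /* Bring down the next dividend bit. */
        big_shl1(remainder, n, big_bit(dividend, i));

        /* If the (extended) remainder >= divisor, subtract and record a 1.
         * When 'overflow' is set the n-word subtraction wraps, which gives
         * exactly the right result modulo 2^(32n). */
        if (overflow || big_cmp(remainder, divisor, n) >= 0) {
            big_sub(remainder, divisor, n);
            quotient[i / 32] |= (uint32_t)1 << (i % 32);
        }
    }
}

That also answers the "how to combine shifts and subtraction" question: for each output bit, shift the remainder left by one, bring down one dividend bit, and subtract the divisor once if it still fits.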

As for the constants (practical programming): doing a full-width subtraction for each output bit is expensive.
You might want to consider working only on a limited window of bits and keeping track of the operations you skipped on the data outside the window.
I don't know if it works at all (just an idea). It certainly won't spare you any of the O() operations,
but it might make data access more local and thus run faster. The window size should grow with the log of the bit count N, otherwise the number of stored operations will be linear in N and you don't gain anything.

I hope this helps :)

The fastest known division algorithm (for normal cases) is called non-restoring division. If you do a search on Google, you'll get plenty of hits (it's what CPUs use).
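
Roughly, here is what non-restoring division looks like on plain 32-bit operands (a sketch of the textbook hardware-style formulation; the names are made up for illustration). Instead of restoring the remainder after subtracting too much, the next step adds the divisor back while still producing a quotient bit:

#include <stdint.h>

/* Non-restoring division of 32-bit unsigned integers (illustrative sketch).
 * The partial remainder is kept signed; a negative value is compensated by
 * adding the divisor on the next step instead of restoring immediately. */
uint32_t div_nonrestoring(uint32_t dividend, uint32_t divisor, uint32_t *remainder)
{
    int64_t r = 0;            /* signed partial remainder, needs more than 32 bits */
    uint32_t q = 0;

    for (int i = 31; i >= 0; i--) {
        int was_nonneg = (r >= 0);
        r = 2 * r + ((dividend >> i) & 1);   /* bring down the next dividend bit */
        if (was_nonneg)
            r -= divisor;                    /* try a subtraction */
        else
            r += divisor;                    /* compensate the previous oversubtraction */
        if (r >= 0)
            q |= (uint32_t)1 << i;           /* this quotient bit is 1 */
    }

    if (r < 0)                /* final correction step */
        r += divisor;
    *remainder = (uint32_t)r;
    return q;
}

Compared with the compare-then-subtract loop above, every bit costs exactly one add or subtract with no conditional restore step, which is why it maps so well onto hardware.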









