From Wikipedia: Fourier division.

[Screenshot of the Wikipedia worked example of Fourier division]

What is the logic behind this algorithm?

I know it can be used to divide very large numbers, but **how exactly does it work?**

This appears to be a clever transformation of the long division algorithm. The clever part seems to be that the division operation is used only for the first "digit" of the divisor, a1. The other a(x)'s never need to be divided by in the same way: each is applied in a later step by subtracting its product against the partial quotient from the interim remainder.

That this can validly be done, and that it always works, is probably due to the fact that the "digits" (base 100, in this case) aren't real digits: they can legitimately take values at or above their base (i.e., 100 or more) and even below zero. This allows greater flexibility in when each "digit" is applied to the operation. For instance, the application of the secondary digits of the divisor (a(x) for x > 1) can be deferred until after a partial quotient has been produced by the prior step's division by a1, which in turn lets them be applied as a product subtraction rather than as a division operation.
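To make that concrete, here is a sketch of the scheme in Python (my own code, not from the article: the digit-list representation, the base parameter, and the final correction loop that replaces the algorithm's trailing remainder steps are all my assumptions). Each step divides only by a1; the later divisor digits enter purely as product subtractions, and any out-of-range intermediate "digits" are absorbed when the quotient's integer value is assembled at the end.

```python
def fourier_divide(c_digits, a_digits, base=100):
    """Divide c by a, both given as lists of base-`base` digits
    (most significant first), Fourier-style: only the leading divisor
    digit a1 is ever divided by; the later digits a2, a3, ... are
    applied as products subtracted from the running partial dividend.
    Intermediate quotient "digits" may fall outside [0, base)."""
    k = len(a_digits)
    n = len(c_digits) - k           # number of quotient "digits" to produce
    b = []                          # quotient digits, possibly out of range
    r = c_digits[0]                 # running remainder
    for i in range(1, n + 1):
        # Two-"digit" partial dividend: prior remainder, next digit of c.
        partial = r * base + c_digits[i]
        # Deferred application of the secondary divisor digits a2, a3, ...
        # as product subtractions against earlier partial quotients.
        for j in range(2, k + 1):
            idx = i - j + 1         # 1-based index of quotient digit b_idx
            if idx >= 1:
                partial -= b[idx - 1] * a_digits[j - 1]
        q, r = divmod(partial, a_digits[0])   # the only division: by a1
        b.append(q)

    # Assemble integer values; out-of-range "digits" are absorbed here.
    # The small correction loop below (my addition) stands in for the
    # algorithm's trailing remainder steps, which this sketch truncates.
    def value(digits):
        v = 0
        for d in digits:
            v = v * base + d
        return v
    A, C, Q = value(a_digits), value(c_digits), value(b)
    R = C - Q * A
    while R < 0:
        Q, R = Q - 1, R + A
    while R >= A:
        Q, R = Q + 1, R - A
    return Q, R
```

For example, `fourier_divide([12, 34, 56, 78], [12, 34])` divides 12345678 by 1234 in base 100; along the way the second quotient "digit" comes out negative (-201), which is exactly the out-of-range flexibility described above, and it washes out when the quotient value is assembled.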