Just wait until they learn that computers subtract by adding, and multiply by adding, and divide by adding, and do exponents by adding, and do logarithms by adding.
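For anyone curious how the "subtract by adding" part works: it's two's complement. Here's a minimal Python sketch of the idea, not what any particular ALU does verbatim:

```python
def subtract_via_add(a: int, b: int, bits: int = 8) -> int:
    """Compute a - b using only addition, two's-complement style.

    Negating b means inverting its bits and adding 1, so
    a - b == a + (~b) + 1, truncated to the word width.
    """
    mask = (1 << bits) - 1            # e.g. 0xFF for an 8-bit word
    neg_b = ((b ^ mask) + 1) & mask   # bitwise NOT via XOR with the mask, then +1
    result = (a + neg_b) & mask       # plain addition; the carry out of the top bit is dropped
    # Reinterpret the top bit as a sign bit to get a signed result back.
    return result - (1 << bits) if result & (1 << (bits - 1)) else result

print(subtract_via_add(5, 9))   # -4
print(subtract_via_add(42, 7))  # 35
```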
It multiplies using gate arrays that still do some adding under the hood; hardware multipliers are essentially multiplication tables built out of logic gates. Early CPUs did multiplication by adding (multiplication is really just repeatedly adding the same number to itself), and if you were lucky it was optimized with bit-shifts.
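Roughly what "multiply by adding, with bit-shifts if you're lucky" looks like, as a shift-and-add sketch in Python rather than a model of any real multiplier circuit:

```python
def shift_and_add_multiply(a: int, b: int) -> int:
    """Multiply two non-negative ints using only shifts and adds.

    For each set bit i of b, add (a << i) to the running total --
    the same long multiplication you'd do by hand, just in base 2.
    """
    total = 0
    shift = 0
    while b:
        if b & 1:                # this bit of b contributes a shifted copy of a
            total += a << shift
        b >>= 1
        shift += 1
    return total

assert shift_and_add_multiply(13, 11) == 143
```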
Division is a lot more complicated, though. I did some optimization by multiplying with reciprocals instead, but the speed gain was negligible due to memory bandwidth limitations.
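The reciprocal trick, roughly (a hypothetical sketch, not the poster's actual code): compute 1/d once, then turn a loop full of divisions into multiplications. If the loop is memory-bound, the loads dominate and the win is small, which matches the experience above.

```python
def scale_inplace(values: list[float], divisor: float) -> None:
    """Divide every element by `divisor` using only one division total.

    Hoisting the reciprocal out of the loop replaces N divides with
    1 divide + N multiplies; on a memory-bound loop the gain is small
    because the memory traffic is the real bottleneck.
    """
    reciprocal = 1.0 / divisor        # the only division
    for i in range(len(values)):
        values[i] *= reciprocal       # multiply instead of divide

data = [3.0, 6.0, 9.0]
scale_inplace(data, 3.0)
print(data)  # [1.0, 2.0, 3.0]
```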
There must be add-vantages to this design.
And don’t get me started on De Morgan’s laws!
Wait…is it All…just adding? ಠ_ಠ
Always has been