1/0 ZeroDivisionError... But does it need to be?

I have been developing in Python for some time, and it occurs to me that, from a practical point of view, 1/0 = 1, in that 1 isn’t being divided at all.

When I write in provisions for ZeroDivisionErrors, this is what I have returned: the numerator value as the result.
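Roughly like this, as a minimal sketch (the helper name is made up for illustration):

```python
def numerator_on_zero_div(numerator, denominator):
    """Return numerator/denominator, or the numerator unchanged if the denominator is 0."""
    try:
        return numerator / denominator
    except ZeroDivisionError:
        # Dividing by zero is treated as "not divided at all"
        return numerator


print(numerator_on_zero_div(1, 0))   # 1
print(numerator_on_zero_div(10, 4))  # 2.5
```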

I understand that the standard result for 1/0 is either undefined or infinite: since a value cannot be divided by zero, the division is taken to be infinite.

However, from a practical point of view, if I were to divide a value by zero, I would not be affecting it; i.e. divide a pencil zero times, which is no different than saying do nothing.

I feel like this should be the expected output, rather than raising an error.

x/0 = x

My previous case with handling ZeroDivisionErrors was the opposite (returning 0), because adding anything to the equation would produce false data points in a cumulative calculation. I think it was in conjunction with an FPS logger class.

I’m personally fine with it being raised as an exception. We can handle exceptions however we want, but a mathematically false value (as the default) would cause hidden issues further down.
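For instance, something like this per-use-case handling (the names here are just illustrative, not from a real FPS logger):

```python
def average_fps(frame_count, elapsed_seconds):
    """Frames per second, where the caller decides what zero elapsed time means."""
    try:
        return frame_count / elapsed_seconds
    except ZeroDivisionError:
        # For a cumulative average, 0 avoids injecting a false data point;
        # another caller might prefer to re-raise or return float("inf").
        return 0.0


print(average_fps(120, 2.0))  # 60.0
print(average_fps(0, 0.0))    # 0.0
```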


Adding to what iceythe said, having x/0 be anything but undefined (and an exception) or +/- infinity is probably a bad idea from a language point of view, for a few reasons.

  • Python is used for scientific calculations (among other fields that depend on the math following a strict set of rules). There’s a lot of dependence on the math being right, or at least within a certain error threshold. Changing something like how division works in an inconsistent way is opening yourself up to a world of hurt.
  • Having x/0 == x/1 == x will cause some hard-to-track bugs. You may get an unexpected result and have to hunt down the bug. If you come from a general Python or programming background, you may skip over the line, since it looks like it should be right. It’s the same thing as having something like 1.0 (float) + 1.0 (float) = 2 (int): technically right, but swapping types in inconsistent ways. Look at JavaScript as an example of how a lack of strictness can cause a whole class of really tough-to-solve bugs.
  • 0 can mean many things for different algorithms. For example, you may want to set a value to 0, 1, infinity, error, etc. It’s usually best to go with the strictest and most consistent option first, let someone build less strict systems on top of it, and let developers handle inconsistencies how they want (see the sketch after this list).
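As a rough sketch of that last point (assuming NumPy is installed): plain Python stays strict and raises, while NumPy’s IEEE-754 floats let you opt into +/- infinity instead.

```python
import numpy as np

# The strict default: plain Python raises on division by zero.
try:
    1.0 / 0.0
except ZeroDivisionError as exc:
    print("plain Python:", exc)            # division by zero

# A less strict layer built on top: IEEE-754 semantics give +/- inf.
with np.errstate(divide="ignore"):
    print("NumPy:", np.divide(1.0, 0.0))   # inf
    print("NumPy:", np.divide(-1.0, 0.0))  # -inf
```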

Thanks guys for the response; I suppose I didn’t consider a number of use cases. It did occur to me before posting that 1/0.00001 or the like produces an extremely large number, drastically different from the proposed 1/0 = 1 result, so it’s apparent where this would cause issues, more so now.
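Just to spell that out (a quick check, nothing rigorous):

```python
# 1/x grows without bound as x shrinks towards 0,
# so defining 1/0 == 1 would be a sudden jump, not a natural limit.
for x in (1e-1, 1e-3, 1e-6, 1e-9):
    print(x, "->", 1 / x)   # results on the order of 1e1, 1e3, 1e6, 1e9
```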

So I suppose I understand further why the accepted answer is infinite or undefined.

However, it is interesting to me that, in a physical sense, to divide something by 0 would be to do nothing, because you can’t, which of course gets me pondering infinity, its representation, and its existence in its entirety.