Good question, though I don't feel comfortable giving an absolute answer; as with most technical questions, it's one of those "it depends" things.
But as a rule of thumb, when working with double-precision floating-point numbers, most computations are only trustworthy to a relative precision of about 10^-13 or so. A double can represent numbers to roughly 16 significant decimal digits (machine epsilon is about 2.2 * 10^-16), but precision is lost along the way, most notably when subtracting two nearly equal numbers, a problem known as catastrophic cancellation. Operations like that are hard to avoid in most algorithms.
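To see cancellation in isolation, here is a minimal sketch using the classic 0.1 + 0.2 example (the behavior is the same on any machine with IEEE 754 doubles):

```julia
# 0.1, 0.2, and 0.3 are not exactly representable in binary, so the
# rounding errors surface once the subtraction cancels the large parts:
println(0.1 + 0.2 - 0.3)        # prints 5.551115123125783e-17, not 0.0

# near 1.0, adjacent doubles are spaced eps(Float64) apart:
println(eps(Float64))           # prints 2.220446049250313e-16

# adding a tiny number to 1.0 and subtracting it back loses digits:
println((1.0 + 1e-13) - 1.0)    # close to 1e-13, but several digits are gone
```

The last line is exactly the kind of round trip that caps practical accuracy around 10^-13: the small quantity has to be represented relative to 1.0, where the grid of doubles is much coarser.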
Here is a sample calculation I just did in the Julia REPL to illustrate this:
Start by generating some random numbers:
julia> r1 = rand()
julia> r2 = rand()
julia> r3 = rand()
Now do some arithmetic and then undo it:
julia> x = (r1/r2)*r3 - 100
julia> y = (100 + x)/r3*r2 # y should equal r1 in exact arithmetic
0.034238108102722764 # compare with r1 above: the last few digits differ
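For a reproducible version of the same experiment, you can seed the RNG and measure the relative error directly (the seed value 42 here is arbitrary, my choice; any seed shows the same effect):

```julia
using Random

Random.seed!(42)                    # arbitrary seed so the run is repeatable
r1, r2, r3 = rand(), rand(), rand()

x = (r1 / r2) * r3 - 100            # subtracting 100 discards leading digits
y = (100 + x) / r3 * r2             # equals r1 in exact arithmetic

rel_err = abs(y - r1) / abs(r1)     # relative error introduced by round-off
println(rel_err)                    # tiny, but typically far above eps(Float64)
```

The subtraction of 100 and the later re-addition are where the damage happens: the absolute round-off near 100 is about 100 * eps, which is a much larger relative error once the result shrinks back to order 1.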