Big-O notation tells us how an algorithm's running time scales with the input size. However, one has to be careful when interpreting this scaling (limiting) behaviour. First of all, saying an algorithm is $O(N^2)$ does not describe its running time for every input size. Let's illustrate how the asymptotic behaviour captured by big-O notation can be "misleading" for "smaller" inputs. Take the operation count $T(N) = 4N^2 - 2N$. If we model it as $T'(N) = N^2$, then $T'(4)$ must be exactly 4 times $T'(2)$. But if we use $T(N)$ itself, $T(4)$ is about $4.666667$ times $T(2)$.
A good example in this direction is N-body force calculation. An $O(N^2)$ direct-sum algorithm can perform better than $O(N \log N)$ ones up to a break-even point, which can be as high as 50K particles.
A good paper explaining this is: Pringle, Gavin J., "Comparison of an O(N) and an O(N log N) N-body solver."
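To make the break-even idea concrete, here is a minimal cost-model sketch. The constant factors `C_DIRECT` and `C_TREE` are purely illustrative assumptions (not measurements from the paper), chosen so that the crossover lands in the tens-of-thousands range:

```python
import math

# Hypothetical per-operation cost factors; these are assumptions for
# illustration only. The O(N log N) solver typically has a much larger
# constant than the direct O(N^2) sum.
C_DIRECT = 1.0     # cost factor for the direct-sum O(N^2) solver
C_TREE = 3000.0    # cost factor for the O(N log N) solver

def direct_cost(n):
    return C_DIRECT * n * n

def tree_cost(n):
    return C_TREE * n * math.log2(n)

# Scan particle counts and report which solver the model favours.
for n in (1_000, 10_000, 40_000, 50_000, 100_000):
    winner = "direct O(N^2)" if direct_cost(n) < tree_cost(n) else "O(N log N)"
    print(f"N = {n:>7}: direct = {direct_cost(n):.3e}, "
          f"tree = {tree_cost(n):.3e} -> {winner}")
```

With these assumed constants, the direct solver wins below roughly 50K particles and the $O(N \log N)$ solver wins above; different constants would move the crossover, which is exactly the point.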