Summary
Problem-solving with scientific principles and evidence is the core activity of data science. Yet on our side there is an almost irresistible urge to solve the most generic form of a problem, and we give in to it everywhere, from programming to the formulation of the problem itself. Resist it: if N=1 in your setting, solve for N=1, not for any integer N. This saves time and resources; try to embed this culture in your teams and management, and extend later, on demand, when it is actually needed.
Solving for N=1 is sufficient if N=1 is your setting
This generalisation phenomenon appears at every level of design: programming, problem formulation, strategy, and policy setting. The core idea can be expressed as a mapping. Say the solution to a problem is a function from one domain to a range:
$$ f : \mathbb{R} \to \mathbb{R} $$
Trying to solve the most generic setting of the problem instead means tackling the multivariate case
$$ f : \mathbb{R}^{m} \to \mathbb{R}^{n} $$
where $m$ and $n$ are the integers generalising the problem.
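As a concrete sketch of the same idea in code (a hypothetical example, not from the original post): if the problem at hand is a scalar root-finding task, the N=1 solution is a few lines of bisection, with no need for the generic $\mathbb{R}^m \to \mathbb{R}^n$ machinery of Jacobians and line searches.

```python
def solve_scalar(f, lo, hi, tol=1e-9):
    """Find a root of a scalar function f on [lo, hi] by bisection.

    Solves the N=1 case directly; a generic multivariate solver
    would be far more code for no benefit in this setting.
    """
    assert f(lo) * f(hi) <= 0, "f must change sign on [lo, hi]"
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid  # root lies in the left half
        else:
            lo = mid  # root lies in the right half
    return (lo + hi) / 2

# N=1 problem: find the positive root of x^2 - 2.
root = solve_scalar(lambda x: x * x - 2.0, 0.0, 2.0)
```

The returned `root` is close to $\sqrt{2}$; generalising to systems of equations can wait until a real multivariate problem demands it.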
Conclusion
It is elegant to solve a generic version of a problem, but is it really needed? Does it reflect reality, and would it ever be used? If N=1 is sufficient, implement that solution first and generalise later. One exception to this basic pattern is when no solution exists at N=1 but one appears at larger N. This may sound absurd, but SVMs work in exactly this setting: by mapping into a higher-dimensional space, they solve classification problems whose class regions are disconnected in the original space.
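The SVM exception can be illustrated with a tiny sketch (a hand-rolled example, not an actual SVM library call): the XOR labelling of four points is not linearly separable in $\mathbb{R}^2$, but after lifting to a higher dimension with a product feature, a single threshold separates the disconnected regions, which is the intuition behind the kernel trick.

```python
# XOR pattern: the positive class occupies two disconnected regions,
# so no single line in R^2 separates the classes.
points = [(-1, -1), (-1, 1), (1, -1), (1, 1)]
labels = [+1, -1, -1, +1]

def feature_map(x1, x2):
    # Lift R^2 -> R^3 by appending the product feature x1 * x2;
    # in this space the third coordinate alone separates the classes.
    return (x1, x2, x1 * x2)

# Classify by the sign of the lifted product coordinate.
predictions = [1 if feature_map(x1, x2)[2] > 0 else -1
               for x1, x2 in points]
```

Here `predictions` matches `labels` exactly: a problem with no solution in the original space gains one after generalising the dimension, which is the one setting where reaching for a larger N first is justified.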
Postscripts
- The title intentionally omits "three": it is a reference to the three-body problem, which physics, or rather mathematics, cannot solve in closed form.