These are some lessons I’ve learned as I attempt to accelerate R code. It’s all common knowledge, but sometimes common knowledge doesn’t sink in until you’ve made the mistakes yourself. Looking them over, they apply to NumPy / Python as well.
Profile first, and then optimize the slow parts. After using R for several years I can spot glaring errors, but beyond that my intuition is usually wrong.
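As an illustration, base R’s sampling profiler can point at the hot spots. Here `slow_analysis` is a made-up stand-in for your own code:

```r
# Hypothetical slow function: grows a vector inside a loop,
# which forces repeated copying.
slow_analysis <- function(n = 1e5) {
  out <- c()
  for (i in seq_len(n)) out <- c(out, sqrt(i))
  sum(out)
}

Rprof("profile.out")          # start the sampling profiler
slow_analysis()
Rprof(NULL)                   # stop profiling
summaryRprof("profile.out")   # see which functions ate the time
```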
Learn to debug really well.
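Base R has more debugging tools than it gets credit for. A quick tour, using a hypothetical function `f` (run this interactively):

```r
f <- function(x) {
  stopifnot(is.numeric(x))
  log(x)
}

f("a")                     # throws an error
traceback()                # shows the call stack for the last error

debug(f)                   # step through f() line by line on the next call
f(10)
undebug(f)

options(error = recover)   # drop into an interactive browser on any error
```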
Check if someone has already written fast code that does what you want.
The speed of popular packages varies widely. For example, data.table is very fast.
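A quick comparison of a grouped mean in base R versus data.table; the data and column names `g` and `x` are made up, and the timings will depend on your machine:

```r
library(data.table)

n  <- 1e7
df <- data.frame(g = sample(letters, n, replace = TRUE), x = runif(n))
dt <- as.data.table(df)

# Grouped mean: base R vs data.table
system.time(aggregate(x ~ g, data = df, FUN = mean))
system.time(dt[, mean(x), by = g])
```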
When comparing different implementations, make sure they actually do the same thing.
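For instance, two implementations of a running sum can be checked for agreement before you time them:

```r
x <- runif(1e6)

v1 <- cumsum(x)
v2 <- Reduce(`+`, x, accumulate = TRUE)

all.equal(v1, v2)   # TRUE up to floating-point tolerance
identical(v1, v2)   # stricter; may be FALSE due to rounding differences
```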
If you run low on memory your program will slow to a crawl as the system swaps to disk, and if you run out entirely it won’t run at all.
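Base R can at least tell you where the memory is going:

```r
x <- matrix(rnorm(1e7), ncol = 100)
print(object.size(x), units = "MB")   # about 76 MB for 1e7 doubles

rm(x)   # drop large objects you no longer need
gc()    # and let R return memory to the OS
```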
Non-vectorized code will be slow. Vectorized code is often fast, but not always.
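The classic example:

```r
x <- runif(1e6)

# Non-vectorized: an explicit loop over elements
slow <- numeric(length(x))
for (i in seq_along(x)) slow[i] <- x[i]^2 + 1

# Vectorized: one expression that operates on the whole vector at C speed
fast <- x^2 + 1

all.equal(slow, fast)   # TRUE
```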
R can work surprisingly well on clusters / distributed systems.
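The parallel package that ships with R is the easy way in. Here I make a local cluster of 4 workers; `makeCluster` also accepts a vector of hostnames for a real multi-machine setup (the hostnames below are placeholders):

```r
library(parallel)

cl <- makeCluster(4)   # 4 local workers
# cl <- makeCluster(c("node1", "node2"))   # or remote machines over SSH

res <- parLapply(cl, 1:8, function(i) i^2)
stopCluster(cl)
```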
Parallel execution is sometimes faster. Use mapply in your code, so if you decide to go parallel then you can just drop in a parallel replacement such as mcmapply, as sketched below.
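A sketch of the swap, with `f` standing in for whatever per-element work you do:

```r
library(parallel)

f <- function(a, b) a^2 + b   # hypothetical per-element work

serial  <- mapply(f, 1:1000, 1001:2000)

# Drop-in parallel version; it forks processes, so it works on
# Linux/macOS, while on Windows mc.cores must stay at 1.
par_res <- mcmapply(f, 1:1000, 1001:2000, mc.cores = 4)

all.equal(serial, par_res)    # same answer either way
```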
Rewrite critical parts of the code in a compiled language like C if you really need speed. This is particularly relevant for iterative computations that can’t be vectorized. Try other options first, because this makes the code much more complex.
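As a sketch, the Rcpp package makes this much less painful than raw C. Here is an iterative recurrence where each value depends on the previous one, so it can’t be vectorized:

```r
library(Rcpp)

# Compile a small C++ function into the R session.
cppFunction("
NumericVector recur_cpp(int n, double a) {
  NumericVector x(n);
  x[0] = 0.5;
  // x[i] depends on x[i-1], so the loop can't be replaced by vector ops
  for (int i = 1; i < n; i++) x[i] = a * x[i - 1] * (1.0 - x[i - 1]);
  return x;
}")

head(recur_cpp(10, 3.7))
```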