Before profiling your code, it is always a good idea to (re-)evaluate the implemented algorithms. Maybe there are asymptotically faster algorithms available that solve your performance problems. If not, you need to profile your code.
In Python, you can either use the cProfile profiler or simply resort to the timeit module. In the first case, the usage is:
from cProfile import Profile
# ...
profiler = Profile()
profiler.enable()
# your profiled code
# ...
profiler.disable()
profiler.print_stats(sort=1)
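Putting the snippet above together into a complete, runnable script might look like this (busy_work is a made-up workload used purely for illustration):

```python
from cProfile import Profile

def busy_work(n):
    # hypothetical workload: sum of squares up to n
    return sum(i * i for i in range(n))

profiler = Profile()
profiler.enable()
busy_work(100_000)
profiler.disable()
# sort the report by cumulative time spent in each function
profiler.print_stats(sort="cumulative")
```

Sorting by cumulative time is usually the most useful view when hunting for hot spots, since it attributes time spent in callees to their callers.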
If you want to profile an entire executable module, you can do so from the command line instead:
python3 -m cProfile -o profile.out your_module.py
python3 -m pstats
% read profile.out
profile.out% callers your_function
profile.out% callees your_function
% quit
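The saved profile can also be inspected non-interactively via the pstats API instead of the interactive browser. A small sketch (the profiled statement is just a placeholder workload, and the output file name matches the command above):

```python
import cProfile
import pstats

# generate profiling data for a hypothetical workload
cProfile.run("sum(i * i for i in range(100_000))", "profile.out")

stats = pstats.Stats("profile.out")
stats.sort_stats("cumulative")   # like 'sort cumulative' in the browser
stats.print_stats(10)            # show the ten most expensive entries
```

This is handy when you want to post-process profiles in scripts, e.g. to compare runs automatically.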
The timeit module is better suited for measuring the execution times of small code snippets. This can be done via:
import timeit
# timeit.timeit(stmt='pass', setup='pass',
#               timer=<default timer>, number=1000000)
# e.g.
print(timeit.timeit("[x**2 for x in range(100)]", number=10000))
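A typical use of timeit is comparing two candidate implementations under the same number of iterations; for example (both snippets are illustrative):

```python
import timeit

# compare two hypothetical ways of squaring numbers
t_pow = timeit.timeit("[x**2 for x in range(100)]", number=10_000)
t_mul = timeit.timeit("[x*x for x in range(100)]", number=10_000)
print(f"x**2: {t_pow:.3f}s, x*x: {t_mul:.3f}s")
```

Because timeit runs the statement in a fresh namespace, any names the snippet needs must be provided via the setup argument (or the globals parameter).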
Alternatively, you can use timeit from your operating system's command line:
python3 -m timeit -n 10000 -s "from your_module import your_function as f" "f()"
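The command-line switches -n and -r correspond to the number and repeat arguments of timeit.repeat. From Python, repeating the measurement several times and taking the minimum reduces noise from other processes; a sketch:

```python
import timeit

# run the measurement 5 times, 10000 loops each; the minimum
# is the value least distorted by other activity on the machine
times = timeit.repeat("[x*x for x in range(100)]",
                      number=10_000, repeat=5)
print(min(times))
```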
For profiling C++ code, there are many options. An easy way to get started is the Valgrind suite with its Callgrind tool, in combination with the KCachegrind graphical user interface. First, make sure everything is installed:
sudo apt-get install valgrind kcachegrind
Make sure you've compiled your program with debugging information (compiler flag -g). Then generate profiling data via:
valgrind --tool=callgrind ${YOUR_PROGRAM} ${YOUR_OPTIONS}
This will generate a file named callgrind.out.*, which can then be analyzed using kcachegrind:
kcachegrind callgrind.out.*