
Tools and Development workflows


Faster R package (re)installation

Purpose: Make it faster to re-build + test BPCells, and make it faster to install + test other packages

  1. Set up ~/.R/Makevars to support faster build times

    • Install ccache and mold (generally available through package managers like apt). ccache caches compilation outputs to speed up repeated compiles of identical input files; mold is a fast, parallelized linker that greatly speeds up the final linking step of a build.

    • The flags below make ccache + mold the default and enable parallel compilation with 12 cores (adjust --jobs to match your core count).

    • (-fuse-ld=mold only works with clang and with gcc 12.1.0 or newer, according to the mold docs. If your gcc is older, either upgrade it or install clang.)

    • Add to your ~/.R/Makevars file:

    • LDFLAGS=-fuse-ld=mold 
      CC=ccache gcc 
      CXX=ccache g++
      MAKEFLAGS=--jobs=12
      
  2. Set up R to install binary packages from the Posit Package Manager (link)

    • Add to your ~/.Rprofile file (the __linux__/noble path below targets Ubuntu 24.04 binaries; substitute your distribution's codename if needed):

    • options(BioC_mirror = "https://packagemanager.posit.co/bioconductor")
      options(BIOCONDUCTOR_CONFIG_FILE = "https://packagemanager.posit.co/bioconductor/config.yaml")
      
      options(repos = c(CRAN = "https://packagemanager.posit.co/cran/__linux__/noble/latest"))
      options(
        HTTPUserAgent = sprintf(
            "R/%s R (%s)",
            getRversion(),
            paste(getRversion(), R.version["platform"], R.version["arch"], R.version["os"])
          )
      )
  3. When you need an equivalent of python's virtualenv, use the renv package (link). It will set up an isolated set of R package installs for all R sessions started within the folder.
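For example, a minimal renv workflow might look like this (a sketch; the GitHub remote spec for the in-repo r/ package is an assumption, adjust to however you normally install BPCells):

    install.packages("renv")
    renv::init()                      # creates a project-local library + renv.lock
    renv::install("bnprks/BPCells/r") # example: install BPCells from the r/ subdirectory on GitHub
    renv::snapshot()                  # record the installed versions in renv.lock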

Generating the documentation site

Purpose: Be able to locally generate the docs site to check doc-string formatting and update the BPCells documentation website

Getting started:

  1. Install pkgdown: install.packages("pkgdown")
  2. (optional) To be able to update the GitHub Pages branch, run git worktree add r/docs docs-html from your local BPCells checkout. This creates a folder at r/docs that tracks the docs-html branch. The git worktree feature lets you keep two simultaneous checkouts of different branches on your computer (main docs link)
  3. In the r/ folder, run devtools::document() to generate the man/*.Rd files that pkgdown uses as inputs
  4. Run a pkgdown function to re-generate the relevant part of the website:
    • pkgdown::build_reference() for the function reference
    • pkgdown::build_news() for the news
    • pkgdown::build_home() for the home page
    • pkgdown::build_site() to build everything from scratch (a bit slow as it will re-run the tutorial vignettes)
  5. View the generated website by opening r/docs/index.html in your web browser (see the combined sketch below)
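Putting steps 3-5 together, a typical doc-editing iteration looks roughly like this (a sketch; the pkg paths assume your R session runs from the repository root):

    devtools::document("r")              # step 3: regenerate man/*.Rd from roxygen comments
    pkgdown::build_reference(pkg = "r")  # step 4: rebuild just the function reference
    browseURL("r/docs/index.html")       # step 5: open the locally generated site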

Other items to know:

  • If you added a new function, be sure to list it in r/pkgdown/_pkgdown.yml in the desired section + position
    • This file also has many other configuration options for site setup. Search the pkgdown docs or github issues to find relevant options to adjust as needed
  • If you are updating function docs, always run devtools::document() first, or else pkgdown::build_reference() will not have your updated contents
  • If you commit and push to the docs-html branch, the docs website will update within seconds/minutes. Just commit from within the r/docs folder if you're using the suggested worktree setup

Profiling C++ code

Purpose: Help optimize C++ code by identifying which parts are responsible for the majority of runtime

Setup + usage instructions here

Nice features of gperftools:

  • Runs a small web server to let you review profiling results in your web browser
  • See time spent on a per-function level (including a helpful flame-graph view)
  • View time spent within a function at approximately a per-line view
  • See the assembly code that corresponds to each line of source code

Cons of gperftools:

  • Slower to get set up than just timing a chunk of code (see the quick-timing sketch at the end of this section).
  • Compiler optimizations sometimes mean that per-line metrics are not available in a function

General profiling tips:

  • gperftools-style function/line-level profiling won't help diagnose systemic problems such as poor cache use, memory-bandwidth bottlenecks, or file I/O bottlenecks.
  • In BPCells, a few functions or lines of code are often responsible for the large majority of the runtime. Focus your optimization effort on that small number of bottlenecks.
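As a quick alternative to full profiling, you can often just time the suspect chunk of R-level code first (a sketch; slow_operation() is a placeholder for whatever BPCells call you are investigating):

    system.time({
      result <- slow_operation()  # placeholder for the code path you suspect is slow
    })
    # bench::mark() or microbenchmark::microbenchmark() give more stable numbers
    # across repeated runs if you need finer comparisons.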

Checking generated assembly of C++ functions

Purpose: Check the generated assembly for hot function loops to see what optimizations are or are not successfully applied.

Steps:

  1. Compile the code with optimizations enabled:

    # devtools::load_all() normally compiles with debug flags; patch
    # pkgbuild::compiler_flags() so it always returns the optimized flags instead.
    library(pkgbuild)
    flags <- pkgbuild::compiler_flags(debug = FALSE)
    new_compiler_flags <- function(debug = FALSE) flags
    assignInNamespace("compiler_flags", new_compiler_flags, "pkgbuild")

    devtools::load_all("r", recompile = TRUE)
  2. Locate the BPCells.so file and search for the address of the function you care about:

    nm -gDC r/src/BPCells.so | grep pseudobulk_matrix

    This results in the following output:

    000000000027cca0 T _BPCells_pseudobulk_matrix_cpp
    000000000021d0e0 T pseudobulk_matrix_cpp(SEXPREC*, std::vector<unsigned int, std::allocator<unsigned int> >, int, bool)
    000000000033c100 W BPCells::PseudobulkStats BPCells::pseudobulk_matrix<double>(std::unique_ptr<BPCells::MatrixLoader<double>, std::default_delete<BPCells::MatrixLoader<double> > >&&, std::vector<unsigned int, std::allocator<unsigned int> > const&, BPCells::PseudobulkStatsMethod, bool, std::atomic<bool>*)
    000000000033b650 W BPCells::PseudobulkStats BPCells::pseudobulk_matrix<float>(std::unique_ptr<BPCells::MatrixLoader<float>, std::default_delete<BPCells::MatrixLoader<float> > >&&, std::vector<unsigned int, std::allocator<unsigned int> > const&, BPCells::PseudobulkStatsMethod, bool, std::atomic<bool>*)
    000000000033ab70 W BPCells::PseudobulkStats BPCells::pseudobulk_matrix<unsigned int>(std::unique_ptr<BPCells::MatrixLoader<unsigned int>, std::default_delete<BPCells::MatrixLoader<unsigned int> > >&&, std::vector<unsigned int, std::allocator<unsigned int> > const&, BPCells::PseudobulkStatsMethod, bool, std::atomic<bool>*)
    

    We can see that the third line has the address we want: 0x000000000033c100

  3. Use gdb to print the assembly code for the function:

    gdb -batch -ex 'file r/src/BPCells.so' -ex 'set disassembly-flavor intel' -ex 'disassemble 0x000000000033c100' > pseudobulk_matrix.asm

    To use the debug symbols to get source lines, use disassemble /s 0x000000000033c100 instead, which prints the corresponding source lines interleaved with the assembly code.

  4. My recommended workflow is to make two files, one with and one without source-code annotations (see the sketch below). Find the location of the relevant code using the source annotations, then read the plain assembly directly to see what's going on.
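For example, both listings can be generated from R with system() calls that reuse the gdb command from step 3 (a sketch; the address and output file names are just the ones from this example):

    so   <- "r/src/BPCells.so"
    addr <- "0x000000000033c100"
    gdb  <- "gdb -batch -ex 'file %s' -ex 'set disassembly-flavor intel' -ex 'disassemble %s %s' > %s"
    system(sprintf(gdb, so, "",   addr, "pseudobulk_matrix_plain.asm"))     # plain assembly
    system(sprintf(gdb, so, "/s", addr, "pseudobulk_matrix_annotated.asm")) # with source lines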

Other options:

Ghidra: You can also try Ghidra, which is a very good free binary analysis tool. It can do nice things like drawing arrows showing where jumps go and displaying which registers hold which variables. However, I was not able to get it to show the source-code mapping in a reasonable way.

gdb_disassemble.py: A wrapper that does steps 2+3 more easily is available in this gist; in this example it would be used as gdb_disassemble.py --show-source r/src/BPCells.so pseudobulk_matrix output.asm

Using GDB to locate C++ crashes

Purpose: Diagnose where a C++ error is happening

  1. Set up your R session:

    • Open an R session and a separate terminal window, and run your code up until just before the line that will cause the crash
    • Get the R process ID using Sys.getpid()
  2. Set up GDB:

    • In a separate terminal window, run sudo gdb -p <ID of the process>
    • Once the GDB prompt comes up, type the command catch throw then press enter. (You can skip this if the crash you're diagnosing crashes the whole R session)
    • Finally, type c then enter, which will allow your R session to run again
  3. Back in your R session, run the line that will cause the crash. It should pause rather than crashing or printing an error message

  4. In the GDB window, you should see a prompt again. Type where and press enter to see the location of the crash in C++. It should look something like this, with BPCells C++ functions listed just after the first few entries:

    Thread 1 "R" hit Catchpoint 1 (exception thrown), 0x000072d82aabb35a in __cxa_throw () from /usr/lib/x86_64-linux-gnu/libstdc++.so.6
    (gdb) where
    #0  0x000072d82aabb35a in __cxa_throw () from /usr/lib/x86_64-linux-gnu/libstdc++.so.6
    #1  0x000072d7a71ca401 in BPCells::FileNumReader<unsigned int>::FileNumReader (this=0x5b38320b2830, path=0x5b382503eb00 "/home/bparks/dev/github/bnprks/BPCells/test_mat/shape")
        at bpcells-cpp/arrayIO/binaryfile.h:83
    #2  0x000072d7a72d398c in std::make_unique<BPCells::FileNumReader<unsigned int>, char const*> () at /usr/include/c++/13/bits/unique_ptr.h:1069
    #3  BPCells::FileReaderBuilder::openUIntReader (this=this@entry=0x7ffec3dbf870, name="shape") at bpcells-cpp/arrayIO/binaryfile.cpp:103
    #4  0x000072d7a7219ea5 in BPCells::StoredMatrix<double>::openPacked (rb=..., load_size=load_size@entry=1024) at bpcells-cpp/matrixIterators/StoredMatrix.h:220
    #5  0x000072d7a71f5820 in dims_matrix_reader_builder (rb=...) at matrix_io.cpp:104
    #6  0x000072d7a71f5f18 in dims_matrix_file_cpp (dir="/home/bparks/dev/github/bnprks/BPCells/test_mat", buffer_size=buffer_size@entry=8192) at matrix_io.cpp:319
    #7  0x000072d7a72a622a in _BPCells_dims_matrix_file_cpp (dirSEXP=0x5b38335090a8, buffer_sizeSEXP=<optimized out>) at RcppExports.cpp:1060
    
  5. Type c in GDB again and press enter. At this point your R session should un-pause and crash/error.

    • If the R session does not unpause, and gdb displays another command prompt, there may be multiple exceptions being thrown (catch throw might pause on some things that aren't actually a BPCells error)
    • In this case, repeat running the where command followed by the c command until you actually get the crash you want
  6. Report the list of functions printed out by where to help diagnose the source of the problem
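From the R side, the whole sequence amounts to very little code (a sketch; crashing_call() is a placeholder for whatever line triggers the error):

    Sys.getpid()     # step 1: note this PID, then attach gdb in another terminal
    # ... in the gdb terminal: sudo gdb -p <PID>, then `catch throw`, then `c` ...
    crashing_call()  # step 3: R pauses here; run `where` in gdb to see the C++ stack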

Helpful tips:

  • If at any point you want to quit GDB, first press Control-C, which will stop your R program and bring up the GDB command prompt. From there, run the quit command to exit GDB
  • If you are using macOS, you might find this debugging very challenging due to complicated permissions requirements. If you have trouble, I recommend giving up early: it is easier to reproduce your error on a Linux machine than to get a Mac to work.