Category Archives: Uncategorized

ML Inference Libraries and Who Uses What


PyTorch

“Optimized tensor library for deep learning using GPUs and CPUs”

  • Based on Torch, developed by Facebook. Python API
  • BoTorch library does Bayesian optimization (probabilistic models, optimizers, support for GPyTorch for deep kernel learning, multi-task GPs, and approximate inference)
  • PyTorch defines a dynamic computational graph (can quickly and easily modify models)
  • Integrates tightly with native Python, so standard Python tooling and optimizations apply
  • Greg G: “my understanding is that PyTorch and TF do essentially the same thing well: automatic differentiation through arbitrary functions. I greatly prefer PyTorch because the API is much easier to use. That said, TF does have a large community and things like TF-probability, which supports probabilistic modeling. Edward is a different beast; it supports probabilistic modeling, so doing stuff like VI or HMC after you specify your model. Stan is similar, but I think it has more support. Anything that is gradient-based will be easy because auto diff. If you want more complex stuff, Uber AI’s Pyro is a probability framework built on PyTorch (it’s the equivalent of TF Probability).”
  • From Archit: compared to TensorFlow, PyTorch has a bit more control and flexibility in how you do inference
  • From Greg D: “My favorite by a good amount is PyTorch. Tensorflow is in second and edward is a distance third. I’ll start with the last. Edward is no longer supported and never had much active support or user community. Debugging is incredibly hard with cryptic error messages. Besides a few example models, it’s very difficult to implement custom models unless you’re an absolute edward expert. Even then it requires extending the language. PyTorch employs a dynamic computation graph, which means the computations that are executed can be determined at runtime, i.e., if the model itself is changing based on the inputs it’s easy to do. That also means it’s much easier to debug than Tensorflow because you can put in print statements everywhere and you can debug using the python debugger PDB. Tensorflow uses a static computation graph, which means the computations are effectively “compiled” before running a program. It makes it much more difficult to debug but the pro is that it’s more efficient and faster (in general, than PyTorch). Also, Caffe2 and PyTorch have now been integrated into the same tool, which is a plus for PyTorch. PyTorch and Tensorflow both have pretty active communities. And both have lots of models freely available on github. I’ve used PyTorch for anything from probabilistic models (like LMMs) to ML models like DNNs and generally anything that could benefit from autodiff. It’s also very easy to retrieve the gradients that PyTorch implicitly computes. You can use it to compute jacobians directly too. My rule of thumb is that whatever the community you’re working in uses the most is the thing to chose because the support will be the best and people will have run into the same problems as you and found solutions (e.g., stack overflow). Let me know if I could answer any more particular questions. Happy to help with pytorch if I can!”
  • This article explains PyTorch inference in a very clear, accessible way:
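The dynamic-graph and autodiff points above can be illustrated without either framework. Below is a toy forward-mode automatic differentiation sketch using dual numbers in plain Python; note that PyTorch and TensorFlow actually use reverse-mode autodiff over computation graphs, so this is purely conceptual:

```python
class Dual:
    """Toy forward-mode autodiff value: carries (value, derivative) through ops."""
    def __init__(self, val, grad=0.0):
        self.val, self.grad = val, grad

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.grad + other.grad)

    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # product rule: (fg)' = f'g + fg'
        return Dual(self.val * other.val,
                    self.grad * other.val + self.val * other.grad)

    __rmul__ = __mul__


def grad(f, x):
    """Evaluate df/dx at x by seeding the derivative slot with 1."""
    return f(Dual(x, 1.0)).grad


# d/dx (x^2 + 3x) = 2x + 3, so at x = 2 the gradient is 7
print(grad(lambda x: x * x + 3 * x, 2.0))  # -> 7.0
```

Because the differentiated function is an arbitrary Python callable, control flow that depends on the input (the "dynamic graph" case) differentiates just as easily – which is exactly the convenience the comments above attribute to PyTorch.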



TensorFlow

“Provides a collection of workflows to develop and train models using Python, JavaScript, or Swift, and easily deploy in the cloud, on-prem, in the browser, or on-device”; has GPU support

  • Computation-graph framework developed by Google, conceptually similar to Theano; Python API
  • The user has to encode distributed computation manually, whereas PyTorch handles more of this for you
  • TensorBoard, the visualization library used for debugging and training, is far superior to PyTorch’s Visdom
  • “Eager execution” evaluates operations immediately – all functionality of the host language is available while the model is executing, for natural control flow and simpler debugging
  • TensorFlow Probability – “Python library built on TensorFlow that makes it easy to combine probabilistic models and deep learning on modern hardware (TPU, GPU).” Edward2 has been incorporated into this to allow deep probabilistic models, VI, and MCMC.
  • Archit uses TensorFlow for spatial matrix factorization among other things. He says it is harder for inference to go wrong in TensorFlow because you have to define the entire computation graph of the model before running it (TensorFlow uses a static computational graph, although it can implement a dynamic one via another library)
  • Andy used TensorFlow to compute gradients to fit a genomics model and a computer vision deep learning model.
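When framework-computed gradients drive a model fit, as in Andy's use cases above, a central-difference check is a cheap way to sanity-test them. This framework-free sketch only assumes a scalar function of one variable:

```python
def numeric_grad(f, x, eps=1e-6):
    """Central-difference approximation of df/dx at x."""
    return (f(x + eps) - f(x - eps)) / (2 * eps)


# f(x) = x^3 has derivative 3x^2, so at x = 2 we expect roughly 12
approx = numeric_grad(lambda x: x ** 3, 2.0)
print(abs(approx - 12.0) < 1e-4)  # -> True
```

In practice you would compare this number against the gradient the framework reports for the same input before trusting a long optimization run.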



Edward

“A library for probabilistic modeling, inference, and criticism”

  • Built on TensorFlow
  • Supports modeling with directed graphical models, implicit generative models, NNs, Bayesian nonparametrics, and probabilistic programs.
  • Supports inference with VI, MC, EM, ABC, and message passing
  • Posterior predictive checks and point-based evaluations
  • Archit and Allison use Edward. Archit says Edward is great to set up VI without having to write out KL divergence or reconstruction error yourself.
  • Andy used Edward to fit a probabilistic model without having to write out all the variational updates
  • Others in lab have stated that Edward is hard to use, but now that it’s integrated into TensorFlow Probability, this may be resolved
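To make concrete what Edward spares you from writing, here is the closed-form KL divergence between two univariate Gaussians written out by hand – the kind of term a VI objective assembles automatically (standard Gaussian KL identity; plain Python for illustration):

```python
import math


def kl_gauss(mu_q, sd_q, mu_p, sd_p):
    """KL( N(mu_q, sd_q^2) || N(mu_p, sd_p^2) ), a typical VI penalty term."""
    return (math.log(sd_p / sd_q)
            + (sd_q ** 2 + (mu_q - mu_p) ** 2) / (2 * sd_p ** 2)
            - 0.5)


# KL of a distribution against itself is zero
print(kl_gauss(0.0, 1.0, 0.0, 1.0))  # -> 0.0
```

Multiply this by the number of latent variables in a real model and it is clear why having the library derive the objective for you is a big time-saver.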



Keras

“High-level neural networks API, written in Python and capable of running on top of TensorFlow, CNTK, or Theano”

  • Good for standard CNNs, RNNs, function approximation, etc
  • Hard to adapt if you need to build custom architecture
  • Niranjani has used Keras but states that she would switch to PyTorch in the future


Greg G also sent a link to a reddit post:

Setting up and running hail on cluster (also featuring Apache Spark)

According to the Hail documentation, “Hail is an open-source, scalable framework for exploring and analyzing genomic data. Starting from genetic data in VCF, BGEN or PLINK format, Hail can, for example:

load variant and sample annotations from text tables, JSON, VCF, VEP, and locus interval files
generate variant annotations like call rate, Hardy-Weinberg equilibrium p-value, and population-specific allele count
generate sample annotations like mean depth, imputed sex, and TiTv ratio
generate new annotations from existing ones as well as genotypes, and use these to filter samples, variants, and genotypes
find Mendelian violations in trios, prune variants in linkage disequilibrium, analyze genetic similarity between samples via the GRM and IBD matrix, and compute sample scores and variant loadings using PCA
perform variant, gene-burden and eQTL association analyses using linear, logistic, and linear mixed regression, and estimate heritability”

I have finished setting up hail to run on the cluster, and this document summarizes what needs to be done in order to run hail on cluster, in both standalone and cluster modes.

These links will also prove to be useful – I recommend reading through them first:

Note that I had to hand-set a lot of configurations to make this work.

First, it is worth noting that the spark distribution we have on cluster is only compatible with python 2 – if you have a default python3 directory and PYTHONPATH set, you may need to disable these.

You can double check the versions of python, ipython and jupyter being used – first, run:
module load anaconda
module load spark/hadoop2.6/2.1.0

Take note that although the spark tutorial from CSES states that you should run module load python, I found that you actually need to run module load anaconda to get the data structures necessary for hail to run.

These commands will load all the necessary binaries. You’ll see that they also need to be included in the .slurm file.

[bj5@della5 ~]$ which spark-submit
[bj5@della5 ~]$ which python
[bj5@della5 ~]$ which ipython
[bj5@della5 ~]$ which jupyter

As for the version of hail, instead of building hail from source, CSES suggested that I download the pre-built distribution (compatible with spark 2.1.0) from

The hail directory is now located at /tigress/BEE/spark_hdfs/hail. Disregard the hadoop and spark directories in spark_hdfs – we’ll stick to the modules installed by CSES at Della.

Now, we need to set a few configurations in ~/.bashrc (or ~/.bash_profile, depending on how your shell is set up)

export JAVA_HOME=/usr/lib/jvm/jre-1.8.0-openjdk
export PATH=$PATH:$JAVA_HOME/bin
# These env vars added for Spark – change to fit your appropriate spark version
export SPARK_HOME=/usr/licensed/spark/spark-2.1.0-bin-hadoop2.6/
# These env vars added for hail
export HAIL_HOME=/tigress/BEE/spark_hdfs/hail
export PATH=$PATH:$HAIL_HOME/bin
# set this to jupyter if running a jupyter notebook – otherwise set it to python
# export PYSPARK_DRIVER_PYTHON=/usr/licensed/anaconda/5.0.1/bin/jupyter
export PYSPARK_DRIVER_PYTHON=/usr/licensed/anaconda/5.0.1/bin/python
export PYSPARK_PYTHON=`which python`
# Set this forwarding if running a jupyter notebook
# export PYSPARK_DRIVER_PYTHON_OPTS="notebook --no-browser --port=8889 --ip="
export PYTHONPATH="$HAIL_HOME/python:$SPARK_HOME/python:$(echo ${SPARK_HOME}/python/lib/py4j-* | tr '\n' ':')$PYTHONPATH"
export SPARK_CLASSPATH=$HAIL_HOME/jars/hail-all-spark.jar

Take note that the environment variable PYSPARK_DRIVER_PYTHON needs to be set differently depending on whether you’re running standalone mode with a jupyter notebook or cluster mode. PYSPARK_DRIVER_PYTHON_OPTS also needs to be set in order to allow for ssh tunneling to run the jupyter notebook (but not in cluster mode). Also take note of the JAVA_HOME directory – setting this is not mentioned by the CSES spark tutorial, but I’ve found that setting JAVA_HOME to another directory makes spark not work.

Let’s start by running cluster mode – submitting a hail job on cluster. But before doing so, I created a .zip file of the directory /tigress/BEE/spark_hdfs/hail/python/hail in order to pass it to the spark-submit option --py-files hail/python/ This is my test.slurm script:

#SBATCH -t 10:00:00
#SBATCH --ntasks-per-node 3
#SBATCH --cpus-per-task 2

module load anaconda
module load spark/hadoop2.6/2.1.0

spark_core_jars=( ${SPARK_HOME}/jars/spark-core*.jar )
if [ ! -e "${spark_core_jars[0]}" ]; then
  echo "Could not find a spark-core jar in ${SPARK_HOME}/jars, are you sure SPARK_HOME is set correctly?" >&2
  exit 1
fi

echo $MASTER
spark-submit --total-executor-cores 6 --executor-memory 5G \
--jars hail/jars/hail-all-spark.jar \
--py-files hail/python/

This script submits a job with 6 cores (1 node, 3 tasks per node, and 2 cpus per task). The options --total-executor-cores and --executor-memory are not detailed in the hail tutorial, but they are suggested by the CSES tutorial.

My script imported a test vcf file and saved it in a vds format used by hail:

from hail import *
hc = HailContext()

This part is pretty simple – you just need to remember that our cluster doesn’t have a dedicated HDFS or other file system used by Spark – so all file addresses need to be prefixed by file://.
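Since every path handed to hail on our cluster needs that file:// scheme, a tiny helper (hypothetical, not part of hail) makes it harder to forget:

```python
def local_uri(path):
    """Prefix a cluster filesystem path with file:// for Spark/hail.

    della has no HDFS, so Spark needs an explicit scheme; paths that
    already carry one are left alone.
    """
    return path if "://" in path else "file://" + path


print(local_uri("/tigress/BEE/test.vcf"))  # -> file:///tigress/BEE/test.vcf
print(local_uri("file:///tmp/test.vds"))   # -> file:///tmp/test.vds
```

You would then write, e.g., hc.import_vcf(local_uri('/tigress/BEE/test.vcf')) instead of remembering the prefix at every call site.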

The part that gave me the most trouble (and is still not fully resolved) is running a spark-enabled jupyter notebook.

First, we need to start up a spark cluster that is idling:

#SBATCH -t 10:00:00
#SBATCH --ntasks-per-node 3
#SBATCH --cpus-per-task 2

module load anaconda
module load spark/hadoop2.6/2.1.0

spark_core_jars=( ${SPARK_HOME}/jars/spark-core*.jar )
if [ ! -e "${spark_core_jars[0]}" ]; then
  echo "Could not find a spark-core jar in ${SPARK_HOME}/jars, are you sure SPARK_HOME is set correctly?" >&2
  exit 1
fi

echo $MASTER
sleep infinity

Then, we need to check the slurm output file, which will have a line that looks like this:
Starting master on spark://della-r1c3n12:7077
starting org.apache.spark.deploy.master.Master, logging to /tmp/spark-bj5-org.apache.spark.deploy.master.Master-1-della-r1c3n12.out
Starting slaves

This means that a spark cluster has been instantiated at spark://della-r1c3n12:7077

Now, remembering to reset the PYSPARK_DRIVER_PYTHON and PYSPARK_DRIVER_PYTHON_OPTS variables (. ~/.bashrc after editing the file – unfortunately, if you set env variables in the .slurm script, jupyter is not able to see them), you can run:

pyspark --master spark://della-r1c3n12:7077 --total-executor-cores 6 --conf spark.sql.files.openCostInBytes=1099511627776 --conf spark.sql.files.maxPartitionBytes=1099511627776 --conf spark.hadoop.parquet.block.size=1099511627776

making sure that the master address is correctly set. The conf variables spark.sql.files.openCostInBytes, spark.sql.files.maxPartitionBytes, and spark.hadoop.parquet.block.size were recommended only for Cloudera clusters in the hail tutorial, but hail doesn’t work if we don’t set them.

Now, from the local machine, you can run: ssh -N -f -L localhost:8889:localhost:8889

Now, you can access the spark- and hail-enabled jupyter notebook from your local machine, available at – you may need to enter the token value that jupyter outputs if this is your first time accessing it.

Another important distinction is that in order to start hail, you need to run:

from hail import *
hc = HailContext(sc)

with the pre-defined sparkContext variable sc.

However, there still is a problem with doing operations on data tables – with the following error:
vds ='test.vds')

IllegalArgumentException Traceback (most recent call last)

IllegalArgumentException: u"Error while instantiating 'org.apache.spark.sql.hive.HiveSessionState':"

It seems like the Hive SQL backend is not being properly initialized. For development purposes, I recommend installing hail locally on your machine and working with a small subset of genotypes, expression values, etc., until the jupyter notebook issue is resolved. But make sure to install compatible versions of the JRE (or JDK), hadoop and hail. The github page for hail:

states that it is also compatible with spark-2.2.0, so building from source using the github repository may be a good option as well – this is what I ended up doing on my local machine.

But the good news is that now hail jobs with spark can be set up and run on cluster!

Fragile Family Scale Construction

The workflow for creating scale variables for the Fragile Family data is broken into four parts.
Here, we describe the generation of the Social skills Self control subscale.
I highly recommend for you to open the scales summary document at /tigress/BEE/projects/rufragfam/data/fragile_families_scales_noquote.tsv with some spreadsheet viewing software (e.g. excel) and one or more of the scales documents for years 1, 3, 5, and 9:
First, SSH into the della server and cd into the Fragile Family restricted use directory
cd /tigress/BEE/projects/rufragfam/data

  • Step 1: create the scale variables file. Relevant script: sp1_processing_scales.ipynb or This python script first obtains the prefix descriptors for individual categories. That is, in the scales documentation, a group of questions is labeled as being asked of the mother, father, child, teacher, etc., and each of these groups has an abbreviation. The raw scale variables file can be accessed with

    less /tigress/BEE/projects/rufragfam/data/fragile_families_scales_noquote.tsv

    It is useful to have this file open with some spreadsheet or tab delimited viewing software to get an idea of how the data is structured. Next, it creates a map between each prefix descriptor and fragile family raw data file. It then, through some automated and manual work, attempts to match all variables defined in the scale documentation with the raw data files. After this automated and manual curation, 1514 of the scale variables defined in the PDFs could be found in the data, and 46 could not.

    This step only needs to be run if there are additional scale documents available, for instance when the year 15 data is released; the year 15 scale variables would then need to be added to the fragile_families_scales_noquote.tsv file prior to running this step.

  • Step 2: creating scales data table from raw data table + step 1 output
  • Relevant script: sp2_creating_scales_data.ipynb or This script takes the scale variables computed in part 1 and converts them into data tables for each scale. The output is stored in the tab delimited files

    ls -al /tigress/BEE/projects/rufragfam/data/raw-scales/

    The output of this step still contains the uncleaned survey responses from the raw data. For any scale, there are a large number of inconsistencies and errors in the raw data. These need to be cleaned before we can do any imputation or scale conversion. Similarly to step 1, this step only needs to be done if new scales documentations are released and only after updating fragile_families_scales_noquote.tsv.

  • Step 3: data cleaning and conversion of fragile families format to a format that can actually be run through imputation software.
  • Relevant script: sp3_clean_scales.ipynb or

    All unique responses to questions for a scale, e.g. Mental_Health_Scale_for_Depression, can be computed with

    cd /tigress/BEE/projects/rufragfam/data/raw-scales

    awk -F"\t" '{ print $4 }' Mental_Health_Scale_for_Depression.tsv | sort | uniq

    Unfortunately, there doesn’t seem to be an automated way to do this so I recommend going through the scale documents and the question/answer formats.

    The FF scale variables and the set of all response values they can take can be found in the file:

    The FF variable identifiers and labels (survey questions) can be found in the file:

    To add support for a new scale, the replaceScaleSpecificQuantities function needs to be updated to encode the raw response values with something meaningful. For instance, for the social skills self control subscale, we process Social_skills__Selfcontrol_subscale.tsv and replace values we wish to impute with float('nan'), and the rest of the values are recoded according to the ff_scales9.pdf documentation. The cleaned scales will be generated in the directory /tigress/BEE/projects/rufragfam/data/clean-scales/

  • Step 4: compute the scale values
  • Relevant script: sp4_computing_scales.ipynb or From the cleaned data and the procedures defined in the FF scales PDFs, we can reconstruct scale scores. To add support for your scale, add it to the if scale_file statement block. For example, the Social_skills__Selfcontrol_subscale.tsv scale is processed by first imputing the data and then summing up the individual counts across survey questions for each wave. The final output file with all the scale data will be stored in /tigress/BEE/projects/rufragfam/data/ff_scales.tsv.

    We are currently using an implementation of multiple imputation by chained equations but other methods can be tested. See
    Also, this is a great resource for imputation in the Fragile Families data.
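Steps 3 and 4 above amount to recoding raw responses, imputing missing values, and summing per wave. This is a minimal pure-Python sketch of that pipeline; the recoding map and the mean imputation are illustrative stand-ins, since the real scripts use the PDF codebooks and multiple imputation by chained equations:

```python
import math

# Hypothetical recoding map: raw FF response codes -> scale values,
# with "don't know"/"refused"-style codes sent to NaN for imputation
RECODE = {"1": 0.0, "2": 1.0, "3": 2.0, "-6": float("nan"), "-9": float("nan")}


def clean(responses):
    """Step 3: replace raw response codes with numeric scale values."""
    return [RECODE.get(r, float("nan")) for r in responses]


def impute_mean(values):
    """Stand-in for chained-equation imputation: fill NaNs with the observed mean."""
    observed = [v for v in values if not math.isnan(v)]
    mean = sum(observed) / len(observed)
    return [mean if math.isnan(v) else v for v in values]


def scale_score(responses):
    """Step 4: sum the cleaned, imputed item values into a scale score."""
    return sum(impute_mean(clean(responses)))


print(scale_score(["1", "3", "-9", "2"]))  # 0 + 2 + imputed 1 + 1 -> 4.0
```

The real replaceScaleSpecificQuantities function plays the role of clean() here, with one recoding branch per scale.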

After adding in your scale in Steps 3 and 4, you can use the ff_scales.tsv file for data modeling. This is where it gets interesting!

Installing IPython notebook on Della – the conda way (featuring Python 3 and IPython 4.0, among other things)

Thanks to Ian’s previous post, I was able to set up IPython notebook on Della, and I’ve been working extensively with it. However, when I was trying to sync the notebooks between the copies on my local machine and Della, I found out that the version of IPython on Della is the old 2.3 version, and that IPython is not backward compatible. So any IPython notebook that I create and work on in my local directory will simply not work in Della, which is quite annoying.

Also, I think there is a lot of benefit to setting up and using Anaconda in my Della directory. It sets up a lot of packages (including Python 3, instead of the archaic 2.6 that Della runs; you have to module load python as Ian does in his post in order to load 2.7) and manages them seamlessly, without having to worry about what package is in what directory.

According to the Conda website:

Conda is an open source package management system and environment management system for installing multiple versions of software packages and their dependencies and switching easily between them. It works on Linux, OS X and Windows, and was created for Python programs but can package and distribute any software.

So, let’s get started. First, go to:

And download the latest Linux x86-64 version, namely:

Then, scp the Miniconda installer to your Della local directory (e.g. /home/

Note: I initially tried using the easy_install way of installing conda, only to run into the following error:

Error: This installation of conda is not initialized. Use 'conda create -n
envname' to create a conda environment and 'source activate envname' to
activate it.

# Note that pip installing conda is not the recommended way for setting up your
# system. The recommended way for setting up a conda system is by installing
# Miniconda, see:

It indeed is preferable to follow their instructions. Then run:


And follow their instructions. Conda will install a set of basic packages (including python 3.5, conda, openssl, pip, and setuptools, to name a few useful ones) under the directory you specify, or the default directory:


It also modifies the PATH for you so that you don’t have to worry about that yourself. How nice of them. (But sometimes you might need to specify the default versions of programs that are on della, especially for distributing jobs to other users, etc. Don’t forget to specify them when needed. But you should be set for most use cases.)

Now, since we are using the conda-packaged version of pip, by simply running:

pip install ipython
pip install jupyter

or:

conda install ipython
conda install jupyter

conda will integrate these packages into your environment. Neat.

That’s it! You can double check what packages you have by running:

conda list

After this, the steps for serving the notebook to your local browser are identical to the previous post. Namely:

#create mycert.pem using the following openssl cmd:

openssl req -x509 -nodes -days 365 -newkey rsa:1024 -keyout mycert.pem -out mycert.pem

# move mycert.pem wherever you’d like

mv mycert.pem ~/local/lib/

# create an ipython profilename

ipython profile create profilename

# generate a password using the following ipython utility:

from IPython.lib import passwd
passwd()
Enter password:
Verify password:

#copy this hashed pass

vi /home/

# edit:

c.NotebookApp.port = 1999 # change this port number to something not in use, I used 2999
c.NotebookApp.password = 'sha1:…' #use generated pass here
c.NotebookApp.certfile = u'/home/'
c.NotebookApp.open_browser = False
c.NotebookApp.ip = ''



# sign off and sign back on to della

ssh -A -L<Your Port #>:<Your Port #>

# boot up notebook

ipython notebook --ip= --profile=profilename --port <Your Port #>
# note that if you are trying to access Della
# from outside the Princeton CS department, you
# may have to forward the same port from your home computer
# to some princeton server, then again to Della

# In your browser go to<Your Port #>


After you’ve set everything up, you can upload the ipython notebook to gist for sharing with others. I’ll repeat the post upload-a-gist-to-github-directly-from-della for convenience:

# First, install gist gem locally at della
gem install --user-install gist
echo 'export PATH=$PATH:/PATH/TO/HOME/.gem/ruby/1.8/bin/' >> ~/.bashrc
source ~/.bashrc

# Boot up connection
gist --login
[Enter Github username and password]

# Upload gist, e.g.
gist my_notebook.ipynb -u [secret_gist_string]

# secret_gist_string is the string already associated with a particular file on Github
# To obtain it, the first time you upload a file to Github (e.g. my_notebook.ipynb) go to
# | Gist | Add file [upload file] | Create secret Gist, which will
# return a secret_gist_string on the panel at right (labeled “HTTPS”)

Here is an example ipython notebook that I shared through gist and is available for viewing:

GTEx eQTL detection: cis- pipeline

Upload a gist to github directly from della


# First, install gist gem locally at della
gem install --user-install gist
echo 'export PATH=$PATH:/PATH/TO/HOME/.gem/ruby/1.8/bin/' >> ~/.bashrc
source ~/.bashrc

# Boot up connection
gist --login
[Enter Github username and password]

# Upload gist, e.g.
gist my_notebook.ipynb -u [secret_gist_string]

# secret_gist_string is the string already associated with a particular file on Github
# To obtain it, the first time you upload a file to Github (e.g. my_notebook.ipynb) go to
# | Gist | Add file [upload file] | Create secret Gist, which will
# return a secret_gist_string on the panel at right (labeled “HTTPS”)