Running vibe-qc

Once you’ve finished installation, every vibe-qc calculation is a Python script you run with the virtual-env’s Python. This page covers the common patterns: one-off runs, interactive sessions, longer batch runs, controlling threading, and capturing output.

The fundamental rule

vibe-qc is a Python package installed into the .venv/ you created during installation. Any time you want to import it (import vibeqc), you need to use the Python inside that venv, not your system Python:

.venv/bin/python my_script.py    # works
python3 my_script.py             # ModuleNotFoundError: No module named 'vibeqc'

If you forget and use the wrong Python, the error you’ll see is:

Traceback (most recent call last):
  File "my_script.py", line 1, in <module>
    import vibeqc
ModuleNotFoundError: No module named 'vibeqc'

The fix is always the same: use .venv/bin/python (or activate the venv first — see below).
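If you’re ever unsure which interpreter a running script is using, print it from inside the script. This is a standard-library check, nothing vibe-qc specific:

import sys

# Should print a path inside .venv/bin/. If it prints /usr/bin/python3
# or similar, the script is running on the wrong interpreter.
print(sys.executable)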

Two ways to invoke the venv’s Python

Option A — explicit path (the safe default)

.venv/bin/python water.py

Works from any directory as long as the path to .venv/ resolves. No state changes in your shell. Recommended for one-offs and for anything you’ll forget about and come back to in a week.

Option B — activate the venv

source .venv/bin/activate    # bash / zsh; fish: source .venv/bin/activate.fish
python water.py              # 'python' now means the venv's python
deactivate                   # restore the original PATH when done

Convenient if you’re going to run many vibe-qc commands in a single shell session. The activation persists until you deactivate or close the terminal. Recommended for interactive exploration and long debugging sessions.

Tip

Activation does not propagate to new shells. If you ssh into a machine, open a new tmux pane, or start a Jupyter kernel, you’ll need to activate again (or use Option A explicitly).

Running a one-off calculation

Save your script, run it, look at the output:

.venv/bin/python water.py
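If you don’t have a script yet, a minimal water.py might look something like the sketch below. Treat it as a sketch only: run_job is mentioned in the Tour, but the vq.Molecule constructor and the exact keyword arguments here are assumptions; check the Quickstart for the real API.

import vibeqc as vq

# Hypothetical constructor and keywords; confirm against the
# Quickstart before copying this verbatim.
mol = vq.Molecule.from_xyz("water.xyz")
result = vq.run_job(mol, method="PBE", basis="def2-SVP")
print(result)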

Most vibe-qc scripts produce both stdout output (banner + SCF trace + orbital tables) and side-effect files (.out logs, .cube cube files, .molden orbitals, .traj trajectories). The side-effect files land in your current working directory, so cd to the directory where you want them before running.

mkdir -p ~/calculations/water-pbe
cd ~/calculations/water-pbe
~/path/to/.venv/bin/python ~/path/to/water.py
ls
# water.out  water.molden  water.traj

Running interactively (Jupyter / IPython)

The venv doesn’t include Jupyter out of the box. Install it into the venv if you want notebooks:

.venv/bin/pip install notebook ipykernel
.venv/bin/jupyter notebook

For a lighter-weight REPL, use IPython:

.venv/bin/pip install ipython
.venv/bin/ipython

Both will pick up vibeqc automatically because they’re using the venv’s Python under the hood.
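To confirm that a notebook or REPL really is using the venv’s interpreter and the venv’s vibeqc install, print both paths (standard-library only):

import sys

import vibeqc

print(sys.executable)   # should point inside .venv/bin/
print(vibeqc.__file__)  # should point inside .venv/lib/.../site-packages/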

Capturing output (long runs, batch logs)

For runs that take more than a minute, you’ll want both the live output AND a file you can grep through later:

.venv/bin/python water.py 2>&1 | tee water.log

tee mirrors everything to both the terminal and the file. The 2>&1 redirects stderr into stdout, so error messages are captured too (otherwise they’d go straight to the terminal and never reach the file).

For runs you want to leave going while you do other things, nohup + &:

nohup .venv/bin/python water.py > water.log 2>&1 &
echo $!                               # prints PID; remember it
disown                                # detach from the shell
# log out, come back, check on it:
tail -f water.log

nohup ensures the process survives the parent shell exiting (so SSH disconnects don’t kill it). disown removes it from the shell’s job table so it doesn’t get a SIGHUP at logout.
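If you’d rather drive a batch of scripts from Python than chain shell commands, a minimal sequential runner needs only the standard library. Unlike tee, this captures each script’s combined stdout/stderr to a per-script log without mirroring it to the terminal (the calculations/ directory name is just an example):

import subprocess
from pathlib import Path

VENV_PY = ".venv/bin/python"  # relative path: run this from the repo root

for script in sorted(Path("calculations").glob("*.py")):
    log_path = script.with_suffix(".log")
    with open(log_path, "w") as log:
        # stderr=subprocess.STDOUT merges stderr into the log file,
        # the same effect as 2>&1 in the shell.
        proc = subprocess.run([VENV_PY, str(script)],
                              stdout=log, stderr=subprocess.STDOUT)
    if proc.returncode != 0:
        print(f"{script.name} failed (exit {proc.returncode}); see {log_path}")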

Controlling threading

vibe-qc parallelises hot kernels (ERI builds, DFT grid integration, gradient evaluation) via OpenMP. By default it uses all logical cores. To pin to a specific count:

OMP_NUM_THREADS=4 .venv/bin/python water.py

…or from inside Python:

import vibeqc as vq
vq.set_num_threads(4)

For molecular calculations on small systems (< 50 basis functions) the parallel speedup plateaus around 4 threads — using more wastes cores. See tutorial 18: parallel execution for the scaling curves.
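That heuristic is easy to encode in a script, using the set_num_threads call shown above (the basis-function count here is a hard-coded stand-in; in practice, query it from your system):

import os

import vibeqc as vq

N_BASIS = 42  # stand-in value; use your system’s actual basis-function count

# Small systems plateau around 4 threads; larger ones can use every core.
vq.set_num_threads(4 if N_BASIS < 50 else (os.cpu_count() or 1))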

Running on a remote machine via SSH

The pattern is the same as running locally — the only difference is that you SSH into the machine first:

ssh you@compute-server.example.com
cd ~/path/to/vibeqc-checkout
.venv/bin/python ~/path/to/calculations/water.py

For long runs you don’t want to babysit, combine tmux with nohup:

ssh you@compute-server.example.com
tmux new -s vibeqc                                 # start a session
.venv/bin/python water.py 2>&1 | tee water.log     # in the session
# Ctrl-B, D to detach. The session keeps running.
# Reconnect later:
ssh you@compute-server.example.com
tmux attach -t vibeqc

tmux is the more modern alternative to screen; either works. Output capture (tee to file) is still recommended even inside tmux because tmux scrollback is limited and ephemeral.

Personal job queue (planned, v0.5)

For users with a typical “laptop + bigger machine” setup — develop on a laptop, SSH to a gaming PC / lab box / in-house server with more cores and RAM, submit a batch of calculations, walk away, get notified when they finish — vibe-qc is gaining a personal job queue (Phase JQ1, scheduled for v0.5). It will provide:

  • vqc-jobs submit script.py to enqueue a calculation

  • A worker that runs jobs in order with the right venv-Python and thread count, capturing each job’s stdout / stderr to log files

  • vqc-jobs ls / show / cancel for queue introspection

  • Notifications (ntfy.sh push, notify-send desktop, email) when a job finishes or fails

  • Restart-on-failure for the SCF-didn’t-converge-this-time case

File-system-backed (one JSON file per job under ~/.vibeqc/jobs/); no daemon, no database, no Redis. Targets the single-developer-with-a-bigger-machine workflow, not multi-user shared infrastructure.

Until JQ1 ships, the supported pattern is tmux + manual invocation as covered above.

Running on a cluster with a job queue

Coming after JQ1 (Phase JQ2). Real production calculations on multi-user clusters need proper batch-queue submission (Slurm, PBS / Torque, LSF, SGE) plus patterns for:

  • Running on multiple nodes by placing one OpenMP-parallel job per node (vibe-qc is shared-memory parallel; there’s no MPI yet)

  • Time-limit handling (checkpointing for SCF restarts is on the roadmap)

  • Memory-budget hints (vibe-qc has a pre-flight vq.estimate_memory() but cluster schedulers want a single number; we’ll document the conversion)

  • File-system layout for shared $SCRATCH vs $HOME

For now, single-machine local or single-machine SSH is the supported pattern.

Running the test suite

After install, confirm everything works:

.venv/bin/python -m pytest tests/

It should report ~880 tests passing in 2–3 minutes on a recent box. This is also the first thing to run if you’ve just rebuilt one of the native deps (e.g., after git pull brings in a new libint patch) — it’ll catch any ABI mismatches immediately.
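pytest can also be driven from inside Python, which is handy for wrapping the suite in your own check scripts. This uses pytest’s documented pytest.main entry point:

import pytest

# Equivalent to `.venv/bin/python -m pytest tests/`; exits with
# pytest’s own exit code so wrappers can detect failures.
raise SystemExit(pytest.main(["tests/"]))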

Common errors and quick fixes

ModuleNotFoundError: No module named 'vibeqc'
  Cause: used system python3 instead of .venv/bin/python.
  Fix: use the venv’s Python; see the fundamental rule above.

RuntimeError: BasisSet: no shells loaded for basis 'X'
  Cause: basis name typo or unsupported basis.
  Fix: check the spelling; see the basis sets user guide for the supported list.

OSError: cannot load library 'libxc.so.7'
  Cause: one of the vendored deps wasn’t built.
  Fix: re-run ./scripts/setup_native_deps.sh from the repo root.

SCF did not converge in N iterations
  Cause: hard-to-converge system.
  Fix: see tutorial 24: SCF convergence for level shift / smearing.

MemoryError or OOM kill mid-SCF
  Cause: basis × system size exceeds available RAM.
  Fix: run vq.estimate_memory(mol, basis) first; see the memory user guide.
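For the MemoryError row, the pre-flight check might be wired up like this. Only vq.estimate_memory(mol, basis) is named in this guide; the Molecule construction and the units of the returned number are assumptions, so consult the memory user guide:

import vibeqc as vq

# Hypothetical constructor, as in the water.py sketch above.
mol = vq.Molecule.from_xyz("big_system.xyz")

# estimate_memory is the pre-flight named in this guide; interpret
# the returned number’s units per the memory user guide.
print(vq.estimate_memory(mol, "def2-TZVP"))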

Where to go next

  • Quickstart — 4-step end-to-end walkthrough using the patterns on this page

  • Tour — wider API surface (run_job, ASE Calculator, logging, custom basis sets)

  • Tutorial index — 25 task-oriented worked examples with theory layers

  • Examples directory — runnable input scripts in the style of classic QC programs