A Python implementation of a standard Bayesian Vector Autoregression (VAR) with conjugate Normal–Inverse–Wishart priors and a Minnesota hyperparameter structure. This repository provides routines for:

- Specifying and estimating a VAR($p$) model
- Constructing Minnesota priors with hyperparameters $\lambda_0$–$\lambda_5$
- Sampling from the Normal–Inverse–Wishart posterior
- Computing Monte Carlo impulse response functions (IRFs)
- Performing forecast error variance decomposition (FEVD)
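To make the conjugate setup concrete, here is a minimal self-contained sketch of Normal–Inverse–Wishart posterior sampling for a VAR, using NumPy/SciPy on toy data. All names, prior values, and the data below are illustrative assumptions, not this package's API:

```python
import numpy as np
from scipy.stats import invwishart

rng = np.random.default_rng(0)

# Toy data: T observations of n series; X stacks p lags plus a constant
T, n, p = 200, 3, 2
Y_full = rng.standard_normal((T + p, n)).cumsum(axis=0)
Y = Y_full[p:]
X = np.hstack([Y_full[p - l - 1:T + p - l - 1] for l in range(p)] + [np.ones((T, 1))])
k = X.shape[1]

# Diffuse NIW prior (toy values): B0 = 0, small prior precision, S0 = I, nu0 = n + 2
Omega0_inv = np.eye(k) * 1e-3
B0 = np.zeros((k, n))
S0, nu0 = np.eye(n), n + 2

# Closed-form posterior moments of the conjugate NIW
Omega_n = np.linalg.inv(Omega0_inv + X.T @ X)
B_n = Omega_n @ (Omega0_inv @ B0 + X.T @ Y)
resid = Y - X @ B_n
S_n = S0 + resid.T @ resid + (B_n - B0).T @ Omega0_inv @ (B_n - B0)
nu_n = nu0 + T

# One joint draw: Sigma ~ IW(S_n, nu_n), then B | Sigma is matrix normal,
# i.e. vec(B) ~ N(vec(B_n), Sigma kron Omega_n)
Sigma = invwishart.rvs(df=nu_n, scale=S_n, random_state=rng)
B = B_n + np.linalg.cholesky(Omega_n) @ rng.standard_normal((k, n)) @ np.linalg.cholesky(Sigma).T
```

Repeating the last two lines gives the Monte Carlo posterior sample used downstream for IRFs and FEVD.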
## Table of Contents

- Project Layout
- Data
- Quickstart
- Results
- Convergence Diagnostics
- API (FastAPI)
- Features
- Contributing
- License
## Project Layout

- `data/raw/`: raw source data (Excel)
- `data/processed/`: cleaned/resampled CSVs
- `notebooks/`: exploratory notebooks
- `scripts/`: CLI entrypoints
- `src/bvar/`: package modules (including the FastAPI service)
- `outputs/`: generated model outputs
- `tests/`: pytest suite
## Data

The data used to run the model (and to recreate the results) is located at `data/raw/Tes-Bills Final.xlsx`.

The code should also work with more variables and other types of time series without major changes.
## Quickstart

Install in editable mode:

```bash
pip install -e .
```

Prepare the quarterly data:

```bash
python - <<'PY'
import pandas as pd

df = pd.read_excel('data/raw/Tes-Bills Final.xlsx')
df['Fecha'] = pd.to_datetime(df['Fecha'])
df = df.set_index('Fecha').resample('QE').mean().reset_index()
df[['DGS5','DGS1','TES 5 años']].to_csv('data/processed/tes_bills_quarterly.csv', index=False)
PY
```

Fit the model and draw posterior samples:
```bash
python scripts/bvar_fit.py \
  --data data/processed/tes_bills_quarterly.csv \
  --lags 3 \
  --draws 2000 \
  --output outputs/fit.npz
```

Compute IRFs, FEVD, and plots:
```bash
python scripts/bvar_infer.py \
  --fit outputs/fit.npz \
  --irf-horizon 35 \
  --output-dir outputs \
  --plots-dir outputs/plots
```

Run tests:
```bash
pytest
```

## Results

Examples of outputs generated using the included data:

- Impulse Response Functions (IRFs)
- Forecast Error Variance Decomposition (FEVD)
- Posterior densities
- MCMC trace plots
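The arrays saved by the fit step can be inspected directly with NumPy. A minimal sketch follows; the key names (`coef_draws`, `sigma_draws`) and shapes are assumptions for illustration, so check `np.load(...).files` against your actual `outputs/fit.npz`:

```python
import numpy as np

# Hypothetical stand-in archive with the kind of contents a fit might save
rng = np.random.default_rng(1)
np.savez('fit_demo.npz',
         coef_draws=rng.standard_normal((2000, 10, 3)),   # (draws, k, n) - assumed layout
         sigma_draws=rng.standard_normal((2000, 3, 3)))   # (draws, n, n) - assumed layout

fit = np.load('fit_demo.npz')
print(fit.files)               # names of the stored arrays
coef = fit['coef_draws']
print(coef.shape)              # (2000, 10, 3)
post_mean = coef.mean(axis=0)  # posterior mean of the coefficient matrix
print(post_mean.shape)         # (10, 3)
```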
## Convergence Diagnostics

Run diagnostics from an existing fit:

```bash
python scripts/bvar_diagnostics.py --fit outputs/fit.npz
```

Run diagnostics directly from data (full pipeline, with at least 2 chains):

```bash
python scripts/bvar_diagnostics.py \
  --data data/processed/tes_bills_quarterly.csv \
  --lags 3 \
  --draws 2000 \
  --chains 2
```

The diagnostics use ArviZ/PyMC to compute R-hat, effective sample sizes (ESS), and a full summary table.
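These diagnostics only need draws grouped by chain. A minimal ArviZ sketch on a synthetic `(chains, draws)` array (the parameter name `phi` and the data are illustrative, not this package's output format):

```python
import numpy as np
import arviz as az

rng = np.random.default_rng(0)
# Synthetic draws for one scalar parameter: 2 chains x 1000 draws;
# in real use, pass the sampler's coefficient draws instead
idata = az.from_dict(posterior={"phi": rng.standard_normal((2, 1000))})

print(az.rhat(idata)["phi"].item())  # near 1.0 for well-mixed chains
print(az.ess(idata)["phi"].item())   # effective sample size
print(az.summary(idata))             # full table: mean, sd, hdi, ess, r_hat, ...
```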
## API (FastAPI)

Start the API:

```bash
uvicorn bvar.api:app --reload --host 0.0.0.0 --port 8000
```

Example request (upload CSV and set lags):

```bash
curl -X POST "http://localhost:8000/estimate" \
  -F "file=@data/processed/tes_bills_quarterly.csv" \
  -F "lags=3" \
  -F "draws=2000" \
  -F "irf_horizon=35"
```

## Features

- Hyperparameter optimization via marginal likelihood
- Posterior sampling of coefficients and covariance draws
- IRF & FEVD routines with companion-form and Cholesky factorization
- Flexible handling of lag order $p$ and variable ordering
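The companion-form/Cholesky IRF machinery mentioned above can be sketched in a few lines. The coefficient matrices and covariance below are toy values, not the package's estimates:

```python
import numpy as np

n, p, H = 2, 2, 8
# Toy VAR(2): coefficient matrices A1, A2 (each n x n) and error covariance Sigma
A = [np.array([[0.5, 0.1], [0.0, 0.4]]),
     np.array([[0.1, 0.0], [0.05, 0.1]])]
Sigma = np.array([[1.0, 0.3], [0.3, 0.5]])

# Companion form [[A1 A2], [I 0]]: its powers give the MA coefficients
C = np.zeros((n * p, n * p))
C[:n, :] = np.hstack(A)
C[n:, :-n] = np.eye(n * (p - 1))

L = np.linalg.cholesky(Sigma)  # structural impact matrix (Cholesky ordering)

irf = np.empty((H + 1, n, n))
Cpow = np.eye(n * p)
for h in range(H + 1):
    irf[h] = Cpow[:n, :n] @ L  # response of each variable to each structural shock
    Cpow = Cpow @ C

print(irf[0])  # impact responses equal the Cholesky factor itself
```

Because the shocks are orthogonalized by the Cholesky factor, the variable ordering matters; the FEVD then accumulates squared IRF entries across horizons.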
## References

- J. Jacobo, *Una introducción a los métodos de máxima entropía y de inferencia bayesiana en econometría*
## Contributing

Contributions are welcome! Please open issues or submit pull requests at https://github.com/pablo-reyes8
## License

This project is licensed under the Apache License 2.0.



