Glen Tanner

Python Developer · Data Engineering & Automation · Airflow · SQL · ETL


About Me

I build reproducible data pipelines with Python and SQL, and orchestrate them with Apache Airflow. Recent work includes a Medicare Part D drug‑spending pipeline (Bash → Python → SQL → PostgreSQL → Jupyter dashboard) and a locum‑tenens expense & timesheet ETL. I value clean structure, automation, and measurable outcomes.

Earlier in my career I worked across healthcare and software systems; today I focus on practical data engineering: ingestion, transformation, and lightweight analytics that turn raw files and APIs into decision‑ready datasets.

Core Skills

Data Engineering

ETL design, batch orchestration, reliability.

  • Apache Airflow (DAGs, scheduling)
  • PostgreSQL, SQL, SQLAlchemy
  • Pandas, data validation

Automation

Workflow scripting & repeatable processes.

  • Bash, Make, Python CLIs
  • Reproducible envs & scaffolding
  • File watchers & ingestion jobs

Analytics & Viz

From raw data to decision support.

  • JupyterLab dashboards
  • Power BI / Tableau (exposure)
  • Exploratory analysis & reporting

Python · SQL · PostgreSQL · Pandas · Airflow · Kafka (learning) · dbt (learning)

Projects

More on GitHub →

Medicare Part D Drug Spending Pipeline

End‑to‑end pipeline: Bash → Python → SQL → PostgreSQL → Jupyter dashboard. Airflow orchestration is in progress; the project focuses on reproducible ingestion and transformations.

  • Python
  • SQL
  • PostgreSQL
  • Airflow (planned)
View Repo
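The shape of that pipeline can be sketched in miniature: extract raw rows, transform and aggregate them, then load into a relational table. This is an illustrative stand-in only, not code from the repo; SQLite substitutes for PostgreSQL so the sketch is self-contained, and the file layout and column names are assumptions.

```python
import csv
import sqlite3
from io import StringIO

# Illustrative raw extract; the real pipeline ingests CMS Part D files
# fetched by the Bash stage. Columns here are hypothetical.
RAW = """drug_name,total_spending
DrugA,1200.50
DrugB,980.00
DrugA,300.25
"""

def extract(text):
    # Parse CSV text into row dicts (the Python stage after the download).
    return list(csv.DictReader(StringIO(text)))

def transform(rows):
    # Normalize types and aggregate total spending per drug.
    totals = {}
    for row in rows:
        name = row["drug_name"]
        totals[name] = totals.get(name, 0.0) + float(row["total_spending"])
    return totals

def load(totals, conn):
    # Load into a relational table (SQLite standing in for PostgreSQL).
    conn.execute(
        "CREATE TABLE IF NOT EXISTS drug_spending (drug TEXT PRIMARY KEY, total REAL)"
    )
    conn.executemany(
        "INSERT OR REPLACE INTO drug_spending VALUES (?, ?)", totals.items()
    )
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(extract(RAW)), conn)
for drug, total in conn.execute("SELECT drug, total FROM drug_spending ORDER BY drug"):
    print(drug, total)  # prints: DrugA 1500.75 / DrugB 980.0
```

Each stage is a plain function, so the same steps can later be wrapped as Airflow tasks in a DAG without rewriting the logic.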

Locum Tenens Expense & Timesheet ETL

Automated ingestion and normalization of receipts and timesheets into a Postgres model, producing monthly reporting datasets and dashboards.

  • Python
  • Pandas
  • PostgreSQL
  • Jupyter
View Repo