Comment by gustavofortti

6 months ago

    Location:            São Paulo, Brazil

    Remote:              Yes – open to Remote (Global), Remote (US), Remote (EU), or hybrid

    Willing to relocate: Yes – US, Canada, Europe

    Technologies:        Python, SQL, PySpark, Apache Spark, Airflow, Hadoop, NiFi, Docker, OpenShift,
                         Azure DevOps, Argo CD, Git, Linux, Selenium, CI/CD, ETL, Data Lakes, REST APIs

    Databases:           PostgreSQL, MySQL, Redis, Elasticsearch, HBase

    Data & Analytics:    Pandas, NumPy, Scikit-learn, Matplotlib, Seaborn

    Resume/CV:           https://www.linkedin.com/in/gustavo-d-12627416a/

    Email:               gustavofortti@gmail.com

I'm a Data Engineer with 3+ years of experience in the financial sector, where I worked on large-scale data infrastructure and credit scoring systems at Santander and Bradesco. I've led projects building Spark and Python pipelines that process billions of records, supporting systems that move billions of dollars monthly. I also have strong experience with big data environments, distributed systems, data manipulation, automation, and production-grade pipelines.

Example Projects:

- Data Pipeline – Crawler + Shopify Integration https://github.com/GustavoFortti/products-crawler (Automates crawling e-commerce product data and publishing it to Shopify; see the first sketch after this list.)

- Low-Cost Elasticsearch Cluster Setup https://github.com/GustavoFortti/cluster-elasticsearch (Docker-based Elasticsearch cluster with Ngrok tunneling, TLS, and node discovery, built for dev/test/MVP environments; see the second sketch below.)
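
As a rough illustration of the crawl-then-publish pattern behind the first project (not taken from the repo: the store URL, access token, and CSS selectors below are placeholders, and the real pipeline is more involved), the core loop can be as small as:

    import requests
    from bs4 import BeautifulSoup

    SHOP_URL = "https://example-store.myshopify.com"  # placeholder store
    TOKEN = "shpat_..."                               # placeholder Admin API token

    def crawl_product(page_url: str) -> dict:
        # Fetch a product page and extract title and price (selectors are site-specific).
        html = requests.get(page_url, timeout=10).text
        soup = BeautifulSoup(html, "html.parser")
        title = soup.select_one("h1").get_text(strip=True)
        price = soup.select_one(".price").get_text(strip=True).lstrip("$")
        return {"title": title, "price": price}

    def publish_to_shopify(product: dict) -> None:
        # Create the product via Shopify's Admin REST API.
        resp = requests.post(
            f"{SHOP_URL}/admin/api/2024-01/products.json",
            headers={"X-Shopify-Access-Token": TOKEN},
            json={"product": {"title": product["title"],
                              "variants": [{"price": product["price"]}]}},
            timeout=10,
        )
        resp.raise_for_status()

    publish_to_shopify(crawl_product("https://example.com/some-product"))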
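
And a quick smoke test for a cluster like the second project's (again illustrative, not from the repo; the host, credentials, and CA path are placeholders) is to hit the _cluster/health endpoint over TLS:

    import requests

    ES_URL = "https://localhost:9200"  # or the Ngrok tunnel URL
    AUTH = ("elastic", "changeme")     # placeholder credentials
    CA_CERT = "./certs/ca.crt"         # CA that signed the cluster's TLS certificates

    # _cluster/health reports status (green/yellow/red) and node count,
    # which confirms that both TLS and node discovery are working.
    resp = requests.get(f"{ES_URL}/_cluster/health", auth=AUTH, verify=CA_CERT, timeout=10)
    resp.raise_for_status()
    health = resp.json()
    print(health["status"], "-", health["number_of_nodes"], "nodes")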

Open to opportunities on product-driven teams solving meaningful problems with data. Available for freelance (part-time/full-time), contract, or permanent roles, remote or with relocation.