
Data Engineer at BSE Global

On-site - United States


As a data engineer, you will help lead and scale our data engineering function, driving the implementation of scalable data pipelines, platform integrations, and data architecture best practices. You’ll build end-to-end data pipelines, sustainable data products, and integrate new platforms that drive insights for ticketing, merchandising, marketing attribution, and fan engagement. The ideal candidate is enthusiastic about data engineering, detail-oriented, and eager to collaborate closely with data analysts and stakeholders across our company. This role offers an excellent opportunity to grow technical skills in a dynamic sports organization with a focus on performance and fan engagement.


Travel may be required on rare occasions (< 5% travel); may require air travel and/or overnight stay of one or more nights. Work primarily in an office environment.

We are an Equal Employment Opportunity ("EEO") Employer. It has been and will continue to be a fundamental policy of the Company not to discriminate on the basis of race, color, creed/religion, gender, gender identity, transgender status, pregnancy and lactation accommodations, marital status, partnership status, domestic violence victim status, sexual orientation, age, national origin, alienage, immigration, or citizenship status, veteran or military status, disability, genetic information, height and weight, arrest or conviction record, caregiver status, credit history, unemployment status, sexual and reproductive health decisions, salary history, status as a victim of domestic violence, stalking, and sex offenses, or any other characteristic prohibited by federal, state or local laws.

Salary

USD 110,000 - 125,000/year

Requirements

Education

  • Bachelor's degree in Computer Science, Data Engineering, Information Systems, or a related field (or equivalent practical experience). Master's degree preferred.

Experience

  • 6+ years of hands‑on data‑engineering experience, including 1-2 years in a leadership role.

Skills

  • Experience with SQL, especially PostgreSQL, and a solid understanding of relational databases.
  • Proven track record of establishing and revamping best practices for reproducible research: version control (Git), code reviews, unit testing, and documentation.
  • Strong understanding of data‑warehousing concepts, ETL/ELT frameworks, and big‑data processing (Spark, Kafka, etc.).
  • Familiarity with data modeling principles and experience building data models to support analytics and reporting. Use of data modeling tools such as dbt and/or Coalesce preferred.
  • Experience with at least one ETL tool or framework (e.g., AWS Glue, Apache Airflow).
  • Proficiency in Python and data-wrangling techniques, with a strong understanding of data manipulation and data‑warehousing concepts and structures.
  • Experience with Snowflake, Databricks, dbt, or similar modern data‑stack technologies.
  • Ability to document workflows clearly and contribute to knowledge sharing within the team.
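As a purely illustrative sketch of the data-wrangling and data-quality skills listed above (the function, field names, and record shape here are hypothetical, not taken from the posting):

```python
from datetime import datetime

def clean_ticket_records(rows):
    """Deduplicate and normalize raw ticketing rows.

    Each row is a dict with hypothetical keys: 'order_id',
    'email', and 'purchased_at' (an ISO-8601 timestamp string).
    """
    seen = set()
    cleaned = []
    for row in rows:
        order_id = row.get("order_id")
        if order_id is None or order_id in seen:
            continue  # drop duplicates and rows missing the key
        seen.add(order_id)
        cleaned.append({
            "order_id": order_id,
            # normalize emails so joins against CRM data match reliably
            "email": row.get("email", "").strip().lower(),
            "purchased_at": datetime.fromisoformat(row["purchased_at"]),
        })
    return cleaned
```

In practice this kind of logic would typically live inside a pipeline task (e.g., an Airflow operator or a dbt staging model) rather than a standalone function.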

Responsibilities

  • Architect scalable, robust, and cost-effective data platforms (data lake, warehouse, streaming) on cloud services (AWS, Azure, GCP).
  • Lead the design, development, and deployment of AI-powered solutions by leveraging platforms such as Amazon Bedrock, Snowflake Cortex, and emerging generative AI technologies to support scalable, data-driven innovation.
  • Create and optimize data models and schemas tailored to business needs, enabling efficient data retrieval and analysis.
  • Manage and enhance PostgreSQL databases, focusing on performance tuning, stored procedures, and data quality assurance.
  • Foster a culture of collaboration, code quality, documentation, and continuous improvement (CI/CD, automated testing, code reviews).
  • Lead efforts to improve data pipelines by identifying opportunities for increased efficiency, reliability, and scalability.
  • Oversee the design, implementation, and maintenance of batch and real-time ETL/ELT processes (AWS Glue, Apache Airflow, dbt, etc.).
  • Ensure high availability, reliability, and performance tuning of data workflows and databases (PostgreSQL, Snowflake, Databricks).
  • Collaborate with analysts and cross-functional teams to understand data requirements and deliver actionable insights.
  • Align with information-security, legal, and compliance teams to enforce data governance, lineage, and privacy standards.
  • Stay abreast of emerging data technologies (stream processing, metadata management, data cataloging) and introduce best-in-class tools.
  • Drive the integration of new data sources into our datalake, with a focus on automation, methodology improvements, and future scalability.
  • Document workflows, data sources, and pipelines to ensure transparency and ease of use for all team members.
  • Monitor and resolve data inconsistencies, working to enhance data accuracy and integrity.
  • Contribute to the development of advanced attribution models to measure the effectiveness of marketing efforts.
  • Continuously improve analytical tools, processes, and methodologies to advance the team's impact on decision-making.
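The attribution-modeling responsibility above could, in its simplest form, be a last-touch model. This is a minimal sketch with made-up journey data, not the company's actual methodology:

```python
from collections import Counter

def last_touch_attribution(journeys):
    """Credit each conversion to the last marketing channel touched.

    `journeys` is a list of channel sequences, one per converting fan,
    e.g. [["email", "social", "paid_search"], ...] (hypothetical data).
    Returns a Counter mapping channel -> conversions credited.
    """
    credits = Counter()
    for touches in journeys:
        if touches:  # skip fans with no recorded touchpoints
            credits[touches[-1]] += 1
    return credits
```

More advanced attribution (multi-touch, time-decay, data-driven) distributes fractional credit across the journey instead of crediting only the final touch.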

Technologies

AWS, Azure, GCP, Amazon Bedrock, Snowflake Cortex, generative AI, PostgreSQL, Snowflake, Databricks, AWS Glue, Apache Airflow, dbt, Spark, Kafka, Python, Git, CI/CD, unit testing


© 2026 Trab. All rights reserved.