
Senior Data Engineer

Apply Now

Developing innovative technologies to revolutionize the payments industry while helping customers transact in global marketplaces is not for the faint of heart. We have big goals and are looking for people to join our team who want to leave a legacy. Just as you are committing to do your best work, we are committed to making this the best place you’ve ever worked. It’s a partnership from the very beginning. If you are looking to step outside your comfort zone, learn new things, apply your skills, collaborate with brilliant people and have fun along the way, then you might be our next Yapster! We promise to provide you with an amazing journey throughout your career. At Yapstone, we don’t just accept difference; we celebrate it, support it, and thrive on it for the benefit of our employees. Yapstone is proud to be an equal opportunity workplace.


Yapstone is looking for a Senior Data Engineer to join the Enterprise Data Services team. The Data Services team works closely with all aspects of data, both internal and external. We are looking for a Senior Data Engineer with the software engineering skills to build data pipelines for efficient and reliable data movement across systems, and to build the next generation of data tools in the public cloud that let us take full advantage of this data. In this role, your work will broadly influence the company's data consumers, executives and analysts.


Primary Responsibilities

  • Design, build and launch highly efficient and reliable data pipelines to move data into our Data Lake/Warehouse

  • Apply data management best practices, including robust data reconciliation, data quality and exception management during and after implementation

  • Design and develop complex ETL/ELT routines to populate modern databases from multiple disparate data sources, including streaming and batch datasets (a short illustrative sketch follows this list)

  • Apply an analytical mindset to identify data patterns and help design data models that enable data-driven insights and intuitive dashboards

  • Bring a customer-first and team-first mindset: listen, collaborate and turn requirements into easy-to-use solutions

  • Actively contribute to data and privacy standards and apply these principles during development

  • Continuously identify process optimization and automation opportunities to improve efficiency, scalability, reliability, availability and timeliness of delivery
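
For illustration only: a minimal, hypothetical sketch of the kind of batch pipeline step described above, with a simple reconciliation and exception-handling pass. The file, table and column names are invented for this example, and SQLite stands in for the actual Data Lake/Warehouse target.

import csv
import sqlite3
from pathlib import Path

def extract(source_csv: Path) -> list[dict]:
    # Extract: read raw payment records from a (hypothetical) exported CSV file.
    with source_csv.open(newline="") as f:
        return list(csv.DictReader(f))

def transform(rows: list[dict]) -> tuple[list[tuple], list[dict]]:
    # Transform: keep valid rows; route malformed records to an exception list for review.
    clean, exceptions = [], []
    for row in rows:
        try:
            clean.append((row["payment_id"], float(row["amount"]), row["currency"]))
        except (KeyError, ValueError):
            exceptions.append(row)
    return clean, exceptions

def load(rows: list[tuple], conn: sqlite3.Connection) -> None:
    # Load: write the clean rows into a warehouse-like target table.
    conn.execute("CREATE TABLE IF NOT EXISTS payments (payment_id TEXT, amount REAL, currency TEXT)")
    conn.executemany("INSERT INTO payments VALUES (?, ?, ?)", rows)
    conn.commit()

def reconcile(expected: int, conn: sqlite3.Connection) -> None:
    # Reconciliation: the loaded row count must match the number of clean source rows.
    loaded = conn.execute("SELECT COUNT(*) FROM payments").fetchone()[0]
    if loaded != expected:
        raise ValueError(f"reconciliation failed: {loaded} loaded vs {expected} expected")

if __name__ == "__main__":
    clean, exceptions = transform(extract(Path("payments_export.csv")))
    conn = sqlite3.connect(":memory:")
    load(clean, conn)
    reconcile(len(clean), conn)
    print(f"loaded {len(clean)} rows; {len(exceptions)} rows routed to exception handling")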

Requirements

  • BS or MS in Computer Science, Information Management, or related field

  • 8+ years of progressive experience as a Data Engineer

  • Demonstrated experience with in-house or cloud data platforms, AWS S3 and other ETL/ELT environments

  • Recent experience building data pipelines from structured, semi-structured and streaming data sources, including data delivered via API

  • Deep understanding of logical and physical data modeling (star schema) for OLTP and OLAP systems built for scalability (see the sketch after this list)

  • Ability to translate a logical data model into a relational or non-relational solution as appropriate

  • Demonstrated experience in SQL tuning, indexing, partitioning, data access patterns and scaling strategies

  • Strong programming/scripting experience in Windows (C#, PowerShell) as well as Unix/Linux environments (Python, Bash)

  • Proficiency in building data visualizations and KPI metrics with tools such as Tableau, Power BI or Looker

  • Experience in NoSQL/Big Data technologies (Couchbase or MongoDB)

  • Experience with Jira, Confluence, SharePoint and Git

  • Excellent analytical, problem-solving and decision-making skills

  • Experience working with large, complex data sets in a high-availability environment

  • Experience with agile methodologies and development practices

  • Self-driven, with the ability to work independently in a hybrid and/or remote work setting

  • Nice to have:

  • Payments or e-commerce industry experience

  • Experience working with Informatica Intelligent Cloud Services (IICS)

  • Experience in Business Intelligence tools and technologies

  • Experience with Snowflake – Snowpipe, SnowSQL, Snowflake Procedures

  • Experience in building out BI solutions in Looker

  • Recent experience migrating from Microsoft SQL Server to Snowflake
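
For illustration only: a minimal, hypothetical star-schema sketch of the kind referenced in the data modeling requirement above, with one fact table joined to two dimension tables and a KPI-style rollup query. The table names and data are invented, and SQLite again stands in for the actual warehouse.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- Dimension tables hold descriptive attributes.
    CREATE TABLE dim_date     (date_key INTEGER PRIMARY KEY, calendar_date TEXT, month TEXT);
    CREATE TABLE dim_merchant (merchant_key INTEGER PRIMARY KEY, merchant_name TEXT, country TEXT);

    -- The fact table holds measures plus foreign keys into the dimensions.
    CREATE TABLE fact_payment (
        payment_id   TEXT PRIMARY KEY,
        date_key     INTEGER REFERENCES dim_date(date_key),
        merchant_key INTEGER REFERENCES dim_merchant(merchant_key),
        amount       REAL
    );

    INSERT INTO dim_date     VALUES (20240101, '2024-01-01', '2024-01');
    INSERT INTO dim_merchant VALUES (1, 'Example Rentals', 'IE');
    INSERT INTO fact_payment VALUES ('p-001', 20240101, 1, 125.00),
                                    ('p-002', 20240101, 1, 250.00);
""")

# KPI-style rollup: payment volume by month and merchant country.
for month, country, volume in conn.execute("""
    SELECT d.month, m.country, SUM(f.amount) AS volume
    FROM fact_payment f
    JOIN dim_date d     ON f.date_key = d.date_key
    JOIN dim_merchant m ON f.merchant_key = m.merchant_key
    GROUP BY d.month, m.country
"""):
    print(month, country, volume)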

Yapstone is an equal opportunities employer.

Apply Now

Our Benefits

  • We offer competitive health plans for you and your family with low employee premiums.
  • You work hard every day to build the future of our company, so we’ll help you build your future with a pension plan that features an employer match.
  • We encourage and support our teams to take time off to recharge and reboot because changing how the world pays is no easy task.
  • We care about the community where we work. Through YapCares, you get 8 hours of paid volunteer time off each year to make a difference.
  • Enjoy food, fun and camaraderie with breakfasts, social hours and events.
  • A great location in Drogheda that is a reverse commute for many Yapsters.