Silicon Valley Bank
Principal Data Engineer (Finance)
About SVB:
Make Next Happen Now. For over 30 years, Silicon Valley Bank (SVB) has helped innovative companies and their investors move bold ideas forward, fast. SVB provides targeted banking services to companies of all sizes in innovation centers around the world. In fact, a majority of the most compelling Fintech disruptors bank with us. Our clients include: Beyond Meat, Shopify, HelloFresh, Cloudera, and Andreessen Horowitz, just to name a few. We help our clients grow, too. More than half of the innovation companies that completed an IPO in the last two years are our clients.
This is a unique career opportunity for a highly experienced Data Engineer with broad-based data skills in designing and building data warehouses, data lakes, and real-time and batch data integrations. The successful candidate should have hands-on implementation experience with Big Data technologies, event processing frameworks, and ETL tools.
The Data Engineering team at Silicon Valley Bank is responsible for delivering data solutions that support all lines of business across the organization. This includes providing data integration services for all batch data movement; managing and enhancing the data warehouse, data lake, and dependent data marts; and providing support for analytics and business intelligence consumers.
About the Role:
As Principal Data Engineer, you will be responsible for building and maintaining the cloud data platform that supports data integrations: building data pipelines to share enterprise data, and designing and building an AWS data lake with appropriate data access, data security, data privacy, and data governance. You will lead a team of Data Engineers to maintain the platform, keeping it current with new technologies, and use agile engineering practices and various data development technologies to rapidly develop creative and efficient data products.
Work closely with the Data Architects, Security, Infrastructure, and SVB Cloud teams to enhance the data platform design; maintain a backlog of technical debt alongside planned upgrades; and design and implement technical solutions. Identify inefficiencies, optimize processes and data flows, and make recommendations for improvements.
A strong candidate will bring skills in cloud orchestration, cloud monitoring and support, compliance, and IT asset management.
Manage deliverables of developers, perform design reviews and coordinate release management activities.
Ideal Experience
6 years of experience in System Operations, DevOps or related fields
3 years of experience maintaining cloud infrastructure such as EC2 servers and RDS, including managing platform usage and sizing for these services
3 years of experience designing and building infrastructure-as-code scripts that support our data platform needs; experience creating CI/CD pipelines and deploying code across environments
Experience maintaining public cloud-based data platforms, especially for robust failover and high availability
Experience designing data privacy constructs to enable data tokenization, encryption, and data access controls on a public cloud data lake
Experience managing data engineering-related AWS services such as Lake Formation, Glue, S3, Lambda, DynamoDB, and other AWS accelerators
Identify inefficiencies, optimize processes and data flows, and make recommendations for improvements
Communicate with developers across teams, both for ad hoc problem solving and for check-ins and discussions with other initiatives
Support non-technical team members in understanding the technical implications of design decisions
Manage deliverables of junior cloud engineers, perform technical reviews and aid in performing technical platform audits
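As an illustration of the data-privacy constructs mentioned above, the sketch below shows deterministic tokenization of sensitive fields in Python. This is a minimal, hypothetical example, not SVB's implementation; the key would in practice come from a managed secrets store such as AWS KMS or Secrets Manager, and the helper names are assumptions.

```python
import hmac
import hashlib

def tokenize(value: str, secret_key: bytes) -> str:
    """Deterministically tokenize a sensitive value with HMAC-SHA256.

    The same input always yields the same token, so tokenized columns
    can still be joined across datasets without exposing raw values.
    """
    return hmac.new(secret_key, value.encode("utf-8"), hashlib.sha256).hexdigest()

# Hypothetical usage: mask PII columns before landing a record in the lake.
key = b"demo-key"  # illustration only; fetch from a secrets manager in practice
record = {"account_id": "12345", "email": "client@example.com"}
masked = {k: tokenize(v, key) for k, v in record.items()}
```

Because the tokenization is deterministic, downstream consumers can still group and join on the masked columns while the raw values stay out of the lake.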
Technology Skills
Cloud environment experience (AWS); containerization is a plus
Infrastructure-as-code experience - Terraform, Puppet, Ansible
CI/CD pipelines - AWS Code, DevOps skills
Network Security, Data Access Management tools
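For context on the data access controls listed above, the sketch below builds an S3 bucket policy document that denies unencrypted uploads, a common guardrail on a data lake. The bucket name is hypothetical and this is only one illustrative control, not a complete security posture.

```python
import json

# Policy denying any PutObject request that does not use SSE-KMS encryption.
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyUnencryptedUploads",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::example-data-lake/*",  # hypothetical bucket
            "Condition": {
                "StringNotEquals": {"s3:x-amz-server-side-encryption": "aws:kms"}
            },
        }
    ],
}

# Serialized form, as it would be attached to the bucket.
policy_json = json.dumps(bucket_policy, indent=2)
```

Expressing policies as code like this lets them be versioned and deployed through the same CI/CD pipelines used for the rest of the platform.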