We're looking for a savvy and experienced Senior Data Engineer to join the Data Platform Engineering team at Hims. As a Senior Data Engineer, you will work with the analytics engineers, product managers, engineers, security, DevOps, analytics, and machine learning teams to build a data platform that backs the self-service analytics, machine learning models, and data products serving over a million Hims & Hers users.

You Will:

* Architect and develop data pipelines to optimize performance, quality, and scalability
* Build, maintain, and operate the scalable, performant, containerized infrastructure required for optimal extraction, transformation, and loading of data from various data sources
* Design, develop, and own robust, scalable data processing and data integration pipelines using Python, dbt, Kafka, Airflow, PySpark, SparkSQL, and REST API endpoints to ingest data from various external sources into the data lake
* Develop testing frameworks and monitoring to improve data quality, observability, pipeline reliability, and performance
* Orchestrate sophisticated data flow patterns across a variety of disparate tooling
* Support analytics engineers, data analysts, and business partners in building tools and data marts that enable self-service analytics
* Partner with the rest of the Data Platform team to set best practices and ensure their execution
* Partner with analytics engineers to ensure the performance and reliability of our data sources
* Partner with machine learning engineers to deploy predictive models
* Partner with the legal and security teams to build frameworks and implement data compliance and security policies
* Partner with DevOps to build IaC and CI/CD pipelines
* Support code versioning and code deployments for data pipelines

You Have:

* 8+ years of professional experience designing, creating, and maintaining scalable data pipelines using Python, API calls, SQL, and scripting languages
* Demonstrated experience writing clean, efficient, well-documented Python code, and a willingness to become effective in other languages as needed
* Demonstrated experience writing complex, highly optimized SQL queries across large data sets
* Experience with cloud technologies such as AWS and/or Google Cloud Platform
* Experience with the Databricks platform
* Experience with IaC technologies like Terraform
* Experience with data warehouses like BigQuery, Databricks, Snowflake, and Postgres
* Experience building event streaming pipelines using Kafka/Confluent Kafka
* Experience with the modern data stack: Airflow/Astronomer, Databricks, dbt, Fivetran, Confluent, Tableau/Looker
* Experience with containers and container orchestration tools such as Docker and Kubernetes
* Experience with machine learning and MLOps
* Experience with CI/CD (Jenkins, GitHub Actions, CircleCI)
* Thorough understanding of the SDLC and Agile frameworks
* Project management skills and a demonstrated ability to work autonomously

Nice to Have:

* Experience building data models using dbt
* Experience with JavaScript and event tracking tools like GTM
* Experience designing and developing systems with defined SLAs and data quality metrics
* Experience with microservice architecture
* Experience architecting an enterprise-grade data platform

#Salary and compensation
No salary data was published by the company, so we estimated the salary based on similar Python, Docker, Testing, DevOps, JavaScript, Cloud, API, Senior, Legal, and Engineer jobs:

$60,000 — $110,000/year

#Benefits
* 401(k)
* Distributed team
* Async
* Vision insurance
* Dental insurance
* Medical insurance
* Unlimited vacation
* Paid time off
* 4 day workweek
* 401k matching
* Company retreats
* Coworking budget
* Learning budget
* Free gym membership
* Mental wellness budget
* Home office budget
* Pay in crypto
* Pseudonymous
* Profit sharing
* Equity compensation
* No whiteboard interview
* No monitoring system
* No politics at work
* We hire old (and young)
#Location
San Francisco, California, United States
Please reference that you found the job on Remote OK; this helps us get more companies to post here. Thanks!
When applying for jobs, you should NEVER have to pay to apply. You should also NEVER have to pay to buy equipment that the company promises to reimburse later, and you should never pay for required training. Those are scams. NEVER PAY FOR ANYTHING! Posts that link to pages about "how to work online" are also scams; don't use them or pay for them. Always verify that you're actually talking to the company in the job post and not an imposter. A good check is whether the site or email domain matches the company's main domain name. Scams in remote work are rampant, so be careful. When clicking the apply button above, you will leave Remote OK and go to the company's job application page outside this site. Remote OK accepts no liability or responsibility as a consequence of any reliance upon information on external sites or here.
Who You Are

Phaidra is looking for a driven Senior Software Engineer to be a part of our Infrastructure team. You are bold and creative, and have deep empathy for customers who may not be tech-savvy. You will work directly with our Infrastructure lead to build and maintain world-class infrastructure. You will have the opportunity to make an immediate impact with your work and guide the product and team as we grow.

We are seeking a team member located within one of the following areas: USA

Responsibilities

The ideal candidate has expertise building self-service internal developer platforms for deploying and managing cloud infrastructure on AWS/GCP/Azure, and ideally Kubernetes Operators. Your responsibilities will touch on building developer portals, developer experience and tooling, infrastructure engineering, and DevOps. Your work will empower developers to be self-sufficient in infrastructure-related operations.

* You will build an internal developer portal and tooling for abstracting infrastructure with a self-service approach
* You will work closely with developers to identify infrastructure pain points and build the platform accordingly
* You will help build and manage infrastructure for:
  * Large-scale data ingestion and processing
  * Distributed model training, evaluation, and inference
  * Automating the end-to-end system for continuous improvement and deployment
* You will work with cloud services like AWS, Azure, and GCP
* You will work with cloud-native technologies like Kubernetes
* You will help build CI/CD pipelines and take part in DevOps duties
* You will write and maintain tooling and documentation for infrastructure, supported applications, and processes
* You will apply SRE principles for observability, automation, and change management
* You will build and maintain cross-functional relationships with internal teams to drive initiatives

Key Qualifications

* Bachelor's or Master's in Computer Science, or equivalent experience
* Proven software engineering experience, ideally with Python or Go
* Experience with internal developer platform products such as Backstage, Port, or Upbound
* Experience working with developers with a focus on infrastructure automation
* Proven experience automating cloud infrastructure on AWS, GCP, or Azure
* Experience developing Kubernetes Operators and general Kubernetes-related automation
* Good understanding of Linux-based operating systems and of containerization and orchestration technologies like Docker and Kubernetes
* Good understanding of DevOps and SRE principles
* Experience with Terraform or other configuration management tools like Jsonnet, Kapitan, Helm, or Kustomize
* Share our company values: curiosity, ownership, transparency & directness, outcome-based performance, and customer empathy

Our Stack

* Languages: (Backend) Python, Go; (Frontend) JavaScript/TypeScript, React; (Customer SDK & Clients) C# .NET
* PyTorch
* Cypress
* Docker, Kubernetes, Terraform & Kapitan
* GitLab CI, ArgoCD, Atlantis, Vercel
* GCP: GKE, Pub/Sub, CloudSQL, BigTable, Postgres, etc.
* Ray.io
* REST & gRPC microservices
* Poetry, Pantsbuild

Preferred Skills & Experience

* Experience with software engineering
* Experience developing internal developer platforms and tooling
* Expertise with multi- and hybrid-cloud environments
* Expertise with some parts of our tech stack is a big plus
* Experience automating scalable multi-tenant system architectures with high availability, fault tolerance, performance tuning, monitoring, and statistics/metrics collection

Onboarding

In your first 30 days...

* You will be immersed in an onboarding program that introduces you to Phaidra and our product
* You will spend time in the Engineering org, learning how the teams operate, interact, and approach problems
* You will read various parts of our handbook and familiarize yourself with the documentation culture at Phaidra
* You will set up your development environment and start working on an onboarding exercise that will introduce you to various parts of our code and infrastructure base
* You will learn how we use agile and be able to navigate our sprint boards and backlogs
* You will learn about various team standards and development & release processes
* You will start to learn about our system architecture and infrastructure

By your first 60 days...

* You will have a solid understanding of what Phaidra does and how we do it
* You will have met with team members across Phaidra and started building relationships that will help you be successful at your job
* You will have completed the onboarding exercise and will be on your way to completing your first production task

By your first 90 days...

* You will have been fully integrated into the team and with team members across the company
* You will have a more in-depth understanding of our system architecture and infrastructure
* You will have completed your first on-call rotation, helping monitor and improve our production environments
* You will have become an expert with our tooling
* You will have started to contribute to knowledge sharing throughout Phaidra

Base Salary

* US residents: $150,000–$210,000

This position also includes equity.

These are good-faith estimates of the base salary range for this position. Multiple factors such as experience, education, level, and location are taken into account when determining compensation.

#Salary and compensation
No salary data was published by the company, so we estimated the salary based on similar Python, Docker, DevOps, Cloud, Senior, and Engineer jobs:

$60,000 — $110,000/year

#Benefits
* 401(k)
* Distributed team
* Async
* Vision insurance
* Dental insurance
* Medical insurance
* Unlimited vacation
* Paid time off
* 4 day workweek
* 401k matching
* Company retreats
* Coworking budget
* Learning budget
* Free gym membership
* Mental wellness budget
* Home office budget
* Pay in crypto
* Pseudonymous
* Profit sharing
* Equity compensation
* No whiteboard interview
* No monitoring system
* No politics at work
* We hire old (and young)
#Location
Seattle, Washington, United States