# Remote Docker + Terraform Jobs

13 results

easyMoney
verified · 🇪🇺 EU · 💰 $90k - $140k
Tags: fintech, php, nodejs, mysql

easyMoney is hiring a Remote Tech Lead

We are easyMoney, part of the easy family of brands. Over 90 million people used easy goods and services in 2019.

Our vision is to make better financial products available to everyone. We are looking for an ambitious developer to help us extend our existing platform, build microservices, work on the core API and introduce new features on the main platform.

Top 8 reasons to work with us:

1. You will work on new and exciting projects in the fintech space, competing with some of the biggest disrupters globally.
2. You will learn new things, every day, alongside experienced developers.
3. Some pretty cool benefits, such as flexible remote working, equipment of your choice and, if you are based in the UK, private medical insurance & employee discounts at thousands of retailers.
4. Interesting problems to solve working with money, investment and finance.
5. Work in a startup but for a major brand.
6. We will help with your career development and help you grow beyond just coding.
7. Freedom to try new technologies and be part of the decision-making process.
8. Your code will get into the real world quickly and you will make a difference.

*What we need from you:*

* Ability to work independently on full feature sets, bringing product specifications to life
* Take ownership of problems with a determination to solve them
* Don't be afraid to learn new things outside of your comfort zone
* Contribute to company standards and best practice
* Identify risks and mitigate them

*Skills required:*

* PHP and/or NodeJS for backend development
* ReactJS and/or VueJS for frontend development
* MySQL and/or PostgreSQL
* API-driven development (REST or GraphQL)
* Experience leading development projects
* Strong problem-solving ability, able to articulate solutions clearly
* Good understanding of AWS with some devops experience
* Understanding of agile development process and practice

*It would be beneficial if you have:*

* General interest in the fintech space
* Experience or interest in React Native and/or NativeScript
* Experience in web development frameworks such as Laravel, Symfony and/or ExpressJS
* Linux and cloud infrastructure administration with devops tools such as Terraform, Jenkins, CircleCI, Docker, Kubernetes
* Advocacy for test-driven development

**About easyMoney**

easyMoney is a financial products platform, with a vision to make better financial products available to everyone. We are part of the easy family of brands, which has up to 99% market recognition across Europe.

*Benefits:*

* Flexible working – either work from home (country of your choice) or from our amazing Chelsea, London office with gym, ping pong, pool, table football, bar etc.
* Friendly and supportive team
* Free easyMoney Plus membership (discounts at 1,500+ retailers)
* Welcome package including Apple laptop or equivalent of your choice, with headphones, mouse etc.
* Working on exciting initiatives at the forefront of fintech
* Bonus scheme
* Pension scheme
* Flexible working hours

#Salary and compensation
$90,000 — $140,000/year

#Location
🇪🇺 EU


See more jobs at easyMoney

# How do you apply?

Send us your CV with a brief covering letter.
Apply for this job
or email to [email protected]
This job post is closed and the position is probably filled. Please do not apply.
**About TripleLift**

TripleLift, one of the fastest-growing ad tech companies in the world, is rooted at the intersection of creative and media. Its mission is to make advertising better for everyone—content owners, advertisers and consumers—by reinventing ad placement one medium at a time. With direct inventory sources, diverse product lines, and creative designed for scale using our Computer Vision technology, TripleLift is driving the next generation of programmatic advertising from desktop to television.

As of January 2021, TripleLift has recorded five years of consecutive growth of greater than 70 percent. TripleLift is a Business Insider Hottest Ad Tech Company, Inc. Magazine 5000, Crain's New York Fast 50, Deloitte Technology Fast 500 and among Inc.'s Best Workplaces. Find more information about how TripleLift is shaping the future of advertising at triplelift.com.

**The Role**

TripleLift is seeking an experienced DevOps engineer to join our team full time. We are a fast-growing startup in the advertising technology sector, trying to tackle some of the most challenging problems facing the industry. As a DevOps engineer, you will be responsible for providing leverage to the engineering team to do the best possible work. This includes managing the infrastructure, working with engineers to improve their deployment and release process, and constantly searching for ways to improve our infrastructure.

**Core Technologies**

We employ a wide variety of technologies at TripleLift to accomplish our goals. From our early days, we’ve always believed in using the right tools for the right job, and we continue to explore new technology options as we grow. The DevOps team uses the following technologies at TripleLift:

* Tools: Chef, Ansible, Terraform, Docker, Kubernetes, CircleCI, Spinnaker, Prometheus, Grafana, Vault, Consul, Snowflake, Airflow, Databricks
* Databases: Aerospike, RDS MySQL, Redshift, MongoDB, and more
* Languages: Java, Python, Node.js, TypeScript, Scala, and more
* Amazon Web Services and Google Cloud (GCP) to keep everything humming

**Responsibilities**

* Collaborate with the rest of the engineering team to come up with best practices for writing and scaling good code
* Improve our infrastructure and deployment processes
* Build tools that make every engineer more productive
* Work with each team to optimize their application performance
* Develop a unified system for monitoring, logging and error handling
* Search for industry best practices and use them to drive our team forward
* Work with teams to optimize and reduce cloud costs

**Desired Skills and Attributes**

* Significant experience in a DevOps or SRE role
* Understanding of container technologies, like Docker, and of what it takes to containerize applications
* Loves automation and automating repetitive work
* Understands best practices of application, data, and cloud security
* Understands best practices around building scalable, reliable, and highly available secure infrastructure
* Strong understanding of cloud networking and network architecture, especially in the context of multi-region applications
* Skilled in software provisioning, configuration management, and infrastructure automation tools
* Ability to code well in at least one programming language
* Comfortable taking ownership of projects and showcasing key accomplishments
* Strives for continued learning opportunities to build upon craft
* Excellent organizational skills and attention to detail
* Ability to work quickly and independently with minimal oversight
* Ability to work under pressure and multitask in a fast-paced start-up environment
* Desire to accept feedback and constructive criticism
* Extremely strong and demonstrable work ethic
* Proven academic and/or professional achievement

**Education Requirement**

A Bachelor’s degree in a technical subject is preferred, although candidates with relevant experience who hold other degrees will be considered.

**Experience Requirement**

At least five years of working experience in a professional, collaborative environment.

**Location**

New York or Kitchener-Waterloo preferred, but open to remote candidates.

**Benefits and Company Perks**

* 100% Medical, Dental & Vision Plans
* Unlimited PTO
* 401k, FSA, Commuter Benefits
* Weekly Yoga & Bootcamp
* Membership to Headspace (Meditation)
* Ongoing professional development
* Amazing company culture

Note: The Fair Labor Standards Act (FLSA) is a federal labor law of general and nationwide application, including Overtime, Minimum Wages, Child Labor Protections, and the Equal Pay Act. This role is an FLSA-exempt role.

**Awards**

We love celebrating our achievements. They remind us of our contributions making advertising work for everyone, and of the TripleLifters who make it all possible. TripleLift is proud to be recognized by Inc. as a Best Workplace for our culture and benefits, and among Inc.'s Best in Business for our innovations and positive impact on the industry.

To check out more of our awards and distinctions, please visit https://triplelift.com/ideas/#distinctions

**Diversity, Equity, Inclusion and Accessibility at TripleLift**

At TripleLift, we believe in the power of diversity, equity, inclusion and accessibility. Our culture enables individuals to share their uniqueness and contribute as part of a team. With our DEIA initiatives, TripleLift is a place that works for you, and where you can feel a sense of belonging. At TripleLift, we will consider and champion all qualified applicants for employment without regard to race, creed, color, religion, national origin, sex, age, disability, sexual orientation, gender identity, gender expression, genetic predisposition, veteran, marital, or any other status protected by law. TripleLift is proud to be an equal opportunity employer.

TripleLift does not accept unsolicited resumes from any type of recruitment search firm. Any resume submitted in the absence of a signed agreement will become the property of TripleLift and no fee shall be due.

#Salary and compensation
$120,000 — $200,000/year

#Location
United States, Eastern Standard Time Zone
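The listing above asks for an understanding of "what it takes to containerize applications." As a rough illustration (the service name, base image, and build steps are hypothetical, not TripleLift's actual setup), a multi-stage Dockerfile is the common pattern for keeping production images small: one stage builds, a second stage ships only the artifacts.

```dockerfile
# Hypothetical multi-stage build for a Node.js service.
# Stage 1: install dependencies and build.
FROM node:18-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Stage 2: runtime image with only what the service needs.
FROM node:18-alpine
WORKDIR /app
COPY --from=build /app/dist ./dist
COPY --from=build /app/node_modules ./node_modules
# Run as an unprivileged user rather than root.
USER node
CMD ["node", "dist/server.js"]
```

Because the build toolchain never reaches the final stage, the runtime image stays smaller and exposes less attack surface.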


See more jobs at Triplelift

# How do you apply?

This job post has been closed by the poster, which means they probably have enough applicants now. Please do not apply.
This job post is closed and the position is probably filled. Please do not apply.
### About Loomly

Loomly is the Brand Success Platform that empowers marketing teams to streamline collaboration. Think GitHub, for marketing teams: Loomly helps marketers produce, stage, review, approve and publish content — as well as engage with their audience and measure their success. We are very customer-driven and strive to bring simplicity, efficiency and stellar support to our clients.

Loomly is trusted by 11K+ teams around the world and is consistently growing revenue at a 100%+ yearly rate. We're looking for a Lead DevOps Engineer/SRE to join us on this journey.

### How we work

We are a small, fully remote company. We value our efficiency and effectiveness, leveraging autonomy, paying close attention to details and avoiding unnecessary bureaucracy. Our domain requires us to be flexible and adaptable. We take ownership of our work and truly appreciate valuable feedback.

We're driven and self-motivated to succeed. We value rest and time off — we're not into 50-hour workweek grinds. We're in it for the long haul and we're building the sustainable company that we all want to work for.

### About the role

Loomly is looking for someone to take ownership of and lead our DevOps/SRE efforts. This role will have a significant impact on the way we work as a team, as well as on Loomly's current and future growth.

As the Lead DevOps Engineer you'll work on a variety of projects across the stack — planning, architecting and building. You'll help to define engineering priorities and have a significant role in analyzing and setting the technologies we use at Loomly.

### What you'll work on

* Architect, deploy and manage critical infrastructure
* Architect and scale background job systems
* Utilize ECS to host applications and manage container orchestration (EC2 + Fargate)
* Collaborate with other team members (DevOps, Full-Stack, Customer Support, etc.) to plan and complete projects
* Tune, maintain and scale databases
* Create and manage CI/CD pipelines
* Provide visibility, insights and metrics via log and monitoring systems
* Improve and design data/ETL pipelines
* Build and improve developer workflows and tools

### Must-Haves

* In-depth knowledge of AWS and best practices
* Ample experience planning, architecting and deploying infrastructure from initial requirements using a detailed, thorough and organized approach
* History of very explicit, clear and detailed written communication
* In-depth knowledge of Docker and container orchestration
* Based in USA or Canada

### Nice-to-Haves

* Prior experience running and maintaining Ruby applications
* Architecting and scaling queueing and background job systems
* Experience running, scaling and managing Postgres and/or Redis
* Experience building and maintaining ETL pipelines

### Current Tech Stack Highlights

* Infrastructure: AWS, Docker, Elastic Container Service (ECS), Elastic Beanstalk, RDS, Terraform
* Backend: Ruby on Rails, Sidekiq, Node.js, Postgres, Redis
* Frontend: Turbolinks, React

### Benefits and Perks

* Annual salary range: $150K - $180K
* Equity: 0.1% - 0.5%
* Paid time off: 20 days (160 hours) earned throughout the year in pay periods
* Health, vision and dental: group plan covering 99% of health insurance premiums (Gold Tier PPO Plan), and 75% of your dental and vision insurance premiums
* 401(k): 100% match up to 4% of salary with immediate vesting
* Flexible work hours and remote-friendly environment
* Quarterly company get-togethers

#Salary and compensation
$150,000 — $180,000/year

#Location
North America


See more jobs at Loomly

# How do you apply?

This job post has been closed by the poster, which means they probably have enough applicants now. Please do not apply.

Clash App, Inc
closed · 🌏 Worldwide · 💰 $80k - $120k
Tags: aws, cd, ci
This job post is closed and the position is probably filled. Please do not apply.
Clash’s mission is simple: help creators make a living doing what they love. Founded by a former creator, we’re building a platform that provides fans with new ways to engage with their favorite creators, and creators with meaningful monetization tools. Since our early launch in Summer 2020, we've seen explosive growth and recently acquired the Byte app. Now, with our combined 5 million users, we cannot wait to bring our vision of a more creator-centric world to reality. We love creators, love LA, and are looking to scale our team. Join us.

Our creators and their fans deserve a scalable, secure, and constantly evolving backend. You'll help us deliver this by leading the design, configuration, and deployment of our company infrastructure, working closely with other technical folks on the backend team.

We need someone who can automate the boring things, scale the important things, and troubleshoot the broken things. You should have experience with deploying, monitoring, and scaling high-traffic web APIs serving mobile clients, and should be an advocate for protecting our creators' data and ensuring their success!

We offer flexible working hours, pay a decent market rate, and have a team of caring and motivated people ready to work with you on building a world-class platform for creators and their millions of fans.

**Must-haves:**

* Extensive production experience with AWS (esp. RDS, ElastiCache, CloudFront, WAF)
* Experience with Terraform and Terraform Cloud
* Familiarity with security best practices (e.g. OWASP, NIST)
* Expertise with shell scripting and automation
* Use of Docker in both dev and production
* Mastery of YAML (yes, no, 01:02:03…)

#Salary and compensation
$80,000 — $120,000/year

#Location
🌏 Worldwide
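The "Mastery of YAML (yes, no, 01:02:03…)" line is a nod to YAML 1.1's implicit typing rules, which older parsers still follow. A sketch of the classic pitfalls (keys and values are illustrative):

```yaml
# Under YAML 1.1 resolution, unquoted scalars are coerced in surprising ways:
country: NO        # parsed as boolean false, not the string "NO"
enabled: yes       # parsed as boolean true
duration: 1:02:03  # sexagesimal integer: 1*3600 + 2*60 + 3 = 3723
version: 1.20      # parsed as float 1.2, silently dropping the trailing zero

# Quoting keeps values as strings:
country_code: "NO"
app_version: "1.20"
```

YAML 1.2 drops most of these coercions, but config read by YAML 1.1 parsers still needs defensive quoting.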


See more jobs at Clash App, Inc

# How do you apply?

This job post has been closed by the poster, which means they probably have enough applicants now. Please do not apply.

Shopify
verified · closed · United States, Canada
Tags: site reliability, engineering manager, aws, distributed systems
This job post is closed and the position is probably filled. Please do not apply.
**Company Description**

Shopify is now permanently remote and working towards a future that is digital by default. Learn more about what this can mean for you.

Over 1.7 million businesses have bet their success on the stability and performance of the Shopify platform. In order to support these growing businesses—as well as the next million—our systems need to be fast, reliable, scalable and secure. Accomplishing that will require people like you: talented, curious, growth-minded and empathetic engineering managers who are excited to build, support and lead our infrastructure teams.

**Job Description**

Production Engineering, which is part of our core engineering organization, builds, operates and improves the heart of Shopify’s technical platform. We are a fast-growing team focused on building and maintaining tools and services to unlock the power of planet-scale infrastructure for all of Shopify’s merchants, buyers and developers.

Shopify has grown rapidly over the last several years. As an experienced infrastructure engineering manager, we need your help to both start new teams and expand and grow the missions of our existing teams. There are multiple positions available on a variety of teams, and we will work with you as part of the interview process to identify which team best fits your interests, needs and experience.

**Here is a sampling of some of the teams, systems and projects to which you could contribute:**

* Expand the reach of our search systems to standardize the way we index documents in different languages and in various locations around the world
* Scale a team looking at solving issues with shopping cart access, configuration plane information and package tracking data using a globally accessible, high-write key/value store
* Grow the capacity of our worldwide distributed site reliability engineering teams, consulting with other engineering groups on how to build low-latency, highly resilient systems
* Take our observability systems to the next level, expanding and evangelizing the usage of tracing, metrics and structured logging across the company
* Work on expanding our highly scalable and configurable job system to support all of the applications on the platform
* Keep our databases operating optimally using proxies, load shedding, custom routing layers and application-transparent sharding
* Build manipulation primitives such as combination and filtering into our streaming infrastructure to allow teams to translate existing data streams into specific business problems

**Qualifications**

While we don’t need you to have specific experience with our technology stack, these are leadership positions that do require that you have:

* Proven management and leadership skills, allowing you to develop and mentor others as well as build credibility with your team while executing broader engineering strategies
* Demonstrated proficiency designing and improving the development, delivery and automation of software infrastructure within a cloud environment
* Experience developing and designing solutions in a modern, high-level/systems programming language (Go, Ruby, Python, Java, C++, C, etc.)
* Familiarity working with senior stakeholders across the organization, both technical and non-technical, to develop roadmaps, integrate with larger company initiatives and deliver business and engineering value

**If you have experience in any of the following areas, that will certainly be put to good use. But if you don’t, that’s ok -- the faster you apply, the quicker we can get to teaching you about:**

* Building services and deploying them on top of Kubernetes and/or Google Cloud Platform
* Familiarity with how to design, build, understand and maintain distributed systems
* Working with Terraform and/or other infrastructure orchestration tooling
* Participating in an on-call rotation and/or site reliability engineering (SRE) experience
* Automating infrastructure operations

**Additional information**

We know that applying to a new role takes a lot of work and we truly value your time. We’re looking forward to reading your application.

At Shopify, we are committed to building and fostering an environment where our employees feel included, valued, and heard. Our belief is that a strong commitment to diversity and inclusion enables us to truly make commerce better for everyone. We strongly encourage applications from Indigenous peoples, racialized people, people with disabilities, people from gender and sexually diverse communities and/or people with intersectional identities.

#Location
United States, Canada


See more jobs at Shopify

# How do you apply?

This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.

Kontist
verified · closed · 🇪🇺 EU
Tags: aws, infrastructure, cd, ci
This job post is closed and the position is probably filled. Please do not apply.
### About the job

* Take full ownership of our cloud infrastructure on AWS
* Continuously improve the reliability, stability, and performance of the infrastructure
* Ensure infrastructure security and perform routine security audits
* Build out robust monitoring and alerting systems

### Your profile

* Agile experience and mindset
* Confident, assertive and communicative
* Intimate knowledge of the AWS ecosystem
* Experience in managing infrastructure as code
* Configuration management, CI/CD
* CloudFormation and/or Terraform
* Virtualization technology: Docker, Kubernetes
* Strong programming skills in Shell, Python, or Ruby
* Proficiency in English

### What’s in it for you

* Open to a full-time position, either on-site or 100% remote
* A highly motivated and ambitious working environment in a cohesive, fast-growing team
* A multicultural, diverse, and inclusive community where you can grow personally and professionally, including possibilities to move internally within the company
* Lovely sunny and green office in central Berlin with office dogs
* Flexible, trust-based working hours
* Personal coaching
* Regular team events and company off-sites
* Weekly German and English classes
* Sponsored daily on-site lunches
* Urban Sports Club membership

### About Kontist

Kontist is a Berlin, Germany-based financial services provider for freelancers with about 100 employees. We just announced the completion of a €25 million ($29.6M) Series B funding round in March 2021.

#### *Please do not apply if you are not able to work during our core working hours (10:00 - 16:00 CEST). Occasional visits to Berlin (Germany) might be required if you choose to work remotely.*

#Location
🇪🇺 EU
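The profile above pairs "managing infrastructure as code" with CloudFormation and/or Terraform. As a minimal Terraform sketch (resource names and region are illustrative, not Kontist's actual setup), a versioned S3 bucket declared as code looks like this:

```hcl
# Hypothetical example: an audit-log bucket managed as code.
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0"
    }
  }
}

provider "aws" {
  region = "eu-central-1"
}

resource "aws_s3_bucket" "audit_logs" {
  bucket = "example-audit-logs"
}

resource "aws_s3_bucket_versioning" "audit_logs" {
  bucket = aws_s3_bucket.audit_logs.id
  versioning_configuration {
    status = "Enabled"
  }
}
```

Because the desired state lives in version control, `terraform plan` shows the exact diff before `terraform apply` changes anything, which is what makes routine security audits and peer review of infrastructure changes practical.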


See more jobs at Kontist

# How do you apply?

This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.
This job post is closed and the position is probably filled. Please do not apply.
Doximity is transforming the healthcare industry. Our mission is to help doctors be more productive, informed, and connected. As a software engineer, you'll work within cross-functional delivery teams alongside other engineers, designers, and product managers in building software to help improve healthcare.

Our team brings a diverse set of technical and cultural backgrounds and we like to think pragmatically in choosing the tools most appropriate for the job at hand.

One of Doximity's core values is stretching ourselves. Even if you don't check off all the boxes below, we encourage you to apply. Doximity is full of exceptional people that don't fit a mold — join us!

**About you**

* You’re a software engineer with years of experience and a deep understanding of software engineering practices.
* You have a deep understanding of container technologies such as Docker and Kubernetes. Bonus points if you have operated containers in production.
* You’re proficient in Golang. Bonus points if you have written container-based tooling in Golang.
* You have experience working with Terraform and Chef (or similar tooling).
* You are proficient with Unix, AWS, and Git.
* You are self-motivated and able to manage yourself and your own queue.
* You are a problem solver with a passion for simple, clean, and maintainable solutions.
* You agree that concise and effective written and verbal communication is a must for a successful team.
* You are able to maintain a minimum of 5 hours overlap with 9:30 AM to 5:30 PM Pacific time.

**Here's How You Will Make an Impact**

* Help build a container-based self-service infrastructure for product engineering teams.
* Work side-by-side with the rest of the devops and infrastructure team to empower other engineering teams.
* Design and implement secure and easy-to-use tooling and abstractions for other teams to leverage.
* Active involvement in design, implementation, and maintenance of the development, staging, and production infrastructure.
* Participate in an on-call rotation for the services owned by your team.
* Help ensure the stability and uptime of services within the organization.
* Create concise post-mortems in the event of an outage.
* Write and maintain run-books for other engineers to leverage.
* Ensure proper security, monitoring, alerting, and reporting.

**About Us**

* Here are some of the [ways we bring value to doctors](https://drive.google.com/file/d/1qimYh0mG3i1nTJe6jDCDepJt2i4o8MEB/view)
* Our web applications are built primarily using Ruby, Rails, Javascript (Vue.js), and Golang
* Our data engineering stack runs on Python, MySQL, Spark, and Airflow
* Our production application stack is hosted on AWS and we deploy to production on average 50 times per day
* We have over 350 private repositories in GitHub containing our applications, forks of gems, our own internal gems, and open-source projects
* We have worked as a distributed team for a long time; we're currently about 65% distributed
* Find out more information on the [Doximity engineering blog](https://technology.doximity.com/)
* Our [company core values](https://work.doximity.com/)
* Our [recruiting process](https://technology.doximity.com/articles/engineering-recruitment-process-doximity)
* Our [product development cycle](https://technology.doximity.com/articles/mofo-driven-product-development)
* Our [on-boarding & mentorship process](https://technology.doximity.com/articles/software-engineering-on-boarding-at-doximity)

**Benefits & Perks**

* Generous time-off policy
* Comprehensive benefits including medical, vision, dental, Life/ADD, 401k, flex spending accounts, commuter benefits, equipment budget, and continuous education budget
* Pre-IPO stock incentives
* and much more! For a full list, see our career page

**More about Doximity**

We’re thrilled to be named the Fastest Growing Company in the Bay Area, and one of Fast Company’s Most Innovative Companies. Joining Doximity means being part of an incredibly talented and humble team. We work on amazing products that over 70% of US doctors (and over one million healthcare professionals) use to make their busy lives a little easier. We’re driven by the goal of improving inefficiencies in our $3.5 trillion U.S. healthcare system and love creating technology that has a real, meaningful impact on people’s lives. To learn more about our team, culture, and users, check out our careers page, company blog, and engineering blog. We’re growing fast, and there are plenty of opportunities for you to make an impact—join us!

*Doximity is proud to be an equal opportunity employer, and committed to providing employment opportunities regardless of race, religious creed, color, national origin, ancestry, physical disability, mental disability, medical condition, genetic information, marital status, sex, gender, gender identity, gender expression, pregnancy, childbirth and breastfeeding, age, sexual orientation, military or veteran status, or any other protected classification. We also consider qualified applicants with criminal histories, consistent with applicable federal, state and local law.*

#Location
🇺🇸 US


See more jobs at Doximity

# How do you apply?

This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.
This job post is closed and the position is probably filled. Please do not apply.
Doximity is transforming the healthcare industry. Our mission is to help doctors be more productive, informed, and connected. Achieving this vision requires a multitude of disciplines, expertises and perspective. One of our core pillars have always been data. As a software engineer focused on the infrastructure aspect of our data stack you will work on improving healthcare by advancing our data capabilities, best practices and systems. Our team brings a diverse set of technical and cultural backgrounds and we like to think pragmatically in choosing the tools most appropriate for the job at hand.\n\n**About Us**\n\nOur data teams schedule over 1000 Python pipelines and over 350 Spark pipelines every 24 hours, resulting in over 5000 data processing tasks each day. Additionally, our data endeavours leverage datasets ranging in size from a few hundred rows to a few hundred billion rows. The Doximity data teams rely heavily on Python3, Airflow, Spark, MySQL, and Snowflake. To support this large undertaking, the data infrastructure team uses AWS, Terraform, and Docker to manage a high-performing and horizontally scalable data stack. The data infrastructure team is responsible for enabling and empowering the data analysts, machine learning engineers and data engineers at Doximity. We provide and evole a foundation on which to build, and ensure that incidental complexites melt into our abstractions. 
Doximity has worked as a distributed team for a long time; pre-pandemic, Doximity was already about 65% distributed.\n\nFind out more information on the Doximity engineering blog\n* Our [company core values](https://work.doximity.com/)\n* Our [recruiting process](https://technology.doximity.com/articles/engineering-recruitment-process-doximity)\n* Our [product development cycle](https://technology.doximity.com/articles/mofo-driven-product-development)\n* Our [on-boarding & mentorship process](https://technology.doximity.com/articles/software-engineering-on-boarding-at-doximity)\n\n**Here's How You Will Make an Impact**\n\nAs a data infrastructure engineer you will work with the rest of the data infrastructure team to design, architect, implement, and support data infrastructure, systems, and processes impacting all other data teams at Doximity. You will solidify our CI/CD pipelines, reduce production impacting issues and improve monitoring and logging. You will support and train data analysts, machine learning engineers, and data engineers on new or improved data infrastructure systems and processes. A key responsibility is to encourage data best-practices through code by continuing the development of our internal data frameworks and libraries. Also, it is your responsibility to identify and address performance, scaling, or resource issues before they impact our product. 
You will spearhead, plan, and carry out the implementation of solutions while self-managing your time and focus.\n\n**About you**\n\n* You have professional data engineering or operations experience with a focus on data infrastructure\n* You are fluent in Python and SQL, and feel at home in a remote Linux server session\n* You have operational experience supporting data stacks through tools like Terraform, Docker, and continuous integration through tools like CircleCI\n* You are foremost an engineer, making you passionate about high code quality, automated testing, and engineering best practices\n* You have the ability to self-manage, prioritize, and deliver functional solutions\n* You possess advanced knowledge of Linux, Git, and AWS (EMR, IAM, VPC, ECS, S3, RDS Aurora, Route53) in a multi-account environment\n* You agree that concise and effective written and verbal communication is a must for a successful team\n\n**Benefits & Perks**\n\n* Generous time off policy\n* Comprehensive benefits including medical, vision, dental, generous paternity and maternity leave, Life/ADD, 401k, flex spending accounts, commuter benefits, equipment budget, and continuous education budget\n* Pre-IPO stock incentives\n* and much more! For a full list, see our career page\n\n**More info on Doximity**\n\nWe're thrilled to be named the Fastest Growing Company in the Bay Area, and one of Fast Company's Most Innovative Companies. Joining Doximity means being part of an incredibly talented and humble team. We work on amazing products that over 70% of US doctors (and over one million healthcare professionals) use to make their busy lives a little easier. We're driven by the goal of improving inefficiencies in our $3.5 trillion U.S. healthcare system and love creating technology that has a real, meaningful impact on people's lives. To learn more about our team, culture, and users, check out our careers page, company blog, and engineering blog. 
We're growing steadily, and there are plenty of opportunities for you to make an impact.\n\n*Doximity is proud to be an equal opportunity employer and committed to providing employment opportunities regardless of race, religious creed, color, national origin, ancestry, physical disability, mental disability, medical condition, genetic information, marital status, sex, gender, gender identity, gender expression, pregnancy, childbirth and breastfeeding, age, sexual orientation, military or veteran status, or any other protected classification. We also consider qualified applicants with criminal histories, consistent with applicable federal, state, and local law.*\n\n\n\n\n#Location\n🇺🇸 US


See more jobs at Doximity

# How do you apply?\n\nThis job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.
This job post is closed and the position is probably filled. Please do not apply.
# We're building the Data Platform of the Future\nJoin us if you want to rethink the way organizations interact with data. We are a **developer-first company**, committed to building around open protocols and delivering the best experience possible for data consumers and publishers.\n\nSplitgraph is a **seed-stage, venture-funded startup hiring its initial team**. The two co-founders are looking to grow the team to five or six people. This is an opportunity to make a big impact on an agile team while working closely with the founders.\n\nSplitgraph is a **remote-first organization**. The founders are based in the UK, and the company is incorporated in both the USA and UK. Candidates are welcome to apply from any geography. We want to work with the most talented, thoughtful and productive engineers in the world.\n# Open Positions\n**Data Engineers welcome!** The job titles have "Software Engineer" in them, but at Splitgraph there's a lot of overlap between data and software engineering. We welcome candidates from all engineering backgrounds.\n\n[Senior Software Engineer - Backend (mainly Python)](https://www.notion.so/splitgraph/Senior-Software-Engineer-Backend-2a2f9e278ba347069bf2566950857250)\n\n[Senior Software Engineer - Frontend (mainly TypeScript)](https://www.notion.so/splitgraph/Senior-Software-Engineer-Frontend-6342cd76b0df483a9fd2ab6818070456)\n\n→ [**Apply to Job**](https://4o99daw6ffu.typeform.com/to/ePkNQiDp) ← (same form for both positions)\n\n# What is Splitgraph?\n## **Open Source Toolkit**\n\n[Our open-source product, sgr,](https://www.github.com/splitgraph/splitgraph) is a tool for building, versioning and querying reproducible datasets. It's inspired by Docker and Git, so it feels familiar. And it's powered by PostgreSQL, so it works seamlessly with existing tools in the Postgres ecosystem. 
Use Splitgraph to package your data into self-contained data images that you can share with other Splitgraph instances.\n\n## **Splitgraph Cloud**\n\nSplitgraph Cloud is a platform for data cataloging, integration and governance. The user can upload data, connect live databases, or "push" versioned snapshots to it. We give them a unified SQL interface to query that data, a catalog to discover and share it, and tools to build/push/pull it.\n\n# Learn More About Us\n\n- Listen to our interview on the [Software Engineering Daily podcast](https://softwareengineeringdaily.com/2020/11/06/splitgraph-data-catalog-and-proxy-with-miles-richardson/)\n\n- Watch our co-founder Artjoms present [Splitgraph at the Bay Area ClickHouse meetup](https://www.youtube.com/watch?v=44CDs7hJTho)\n\n- Read our HN/Reddit posts ([one](https://news.ycombinator.com/item?id=24233948) [two](https://news.ycombinator.com/item?id=23769420) [three](https://news.ycombinator.com/item?id=23627066) [four](https://old.reddit.com/r/datasets/comments/icty0r/we_made_40k_open_government_datasets_queryable/))\n\n- [Read our blog](https://www.splitgraph.com/blog)\n\n- Read the slides from our early (2018) presentations: ["Docker for Data"](https://www.slideshare.net/splitgraph/splitgraph-docker-for-data-119112722), [AHL Meetup](https://www.slideshare.net/splitgraph/splitgraph-ahl-talk)\n\n- [Follow us on Twitter](https://www.twitter.com/splitgraph)\n\n- [Find us on GitHub](https://www.github.com/splitgraph)\n\n- [Chat with us in our community Discord](https://discord.gg/eFEFRKm)\n\n- Explore the [public data catalog](https://www.splitgraph.com/explore) where we index 40k+ datasets\n\n# How We Work: What's our stack look like?\n\nWe prioritize developer experience and productivity. We resent repetition and inefficiency, and we never hesitate to automate the things that cause us friction. 
Here's a sampling of the languages and tools we work with:\n\n- **[Python](https://www.python.org/) for the backend.** Our [core open source](https://www.github.com/splitgraph/splitgraph) tech is written in Python (with [a bit of C](https://github.com/splitgraph/Multicorn) to make it more interesting), as well as most of our backend code. The Python code powers everything from authentication routines to database migrations. We use the latest version and tools like [pytest](https://docs.pytest.org/en/stable/), [mypy](https://github.com/python/mypy) and [Poetry](https://python-poetry.org/) to help us write quality software.\n\n- **[TypeScript](https://www.typescriptlang.org/) for the web stack.** We use TypeScript throughout our web stack. On the frontend we use [React](https://reactjs.org/) with [next.js](https://nextjs.org/). For data fetching we use [apollo-client](https://www.apollographql.com/docs/react/) with fully-typed GraphQL queries auto-generated by [graphql-codegen](https://graphql-code-generator.com/) based on the schema that [Postgraphile](https://www.graphile.org/postgraphile) creates by introspecting the database.\n\n- **[PostgreSQL](https://www.postgresql.org/) for the database, because of course.** Splitgraph is a company built around Postgres, so of course we are going to use it for our own database. In fact, we actually have three databases. 
We have `auth-db` for storing sensitive data, `registry-db` which acts as a [Splitgraph peer](https://www.splitgraph.com/docs/publishing-data/push-data) so users can push Splitgraph images to it using [sgr](https://www.github.com/splitgraph/splitgraph), and `cloud-db` where we store the schemata that Postgraphile uses to autogenerate the GraphQL server.\n\n- **[PL/pgSQL](https://www.postgresql.org/docs/current/plpgsql.html) and [PL/Python](https://www.postgresql.org/docs/current/plpython.html) for stored procedures.** We define a lot of core business logic directly in the database as stored procedures, which are ultimately [exposed by Postgraphile as GraphQL endpoints](https://www.graphile.org/postgraphile/functions/). We find this to be a surprisingly productive way of developing, as it eliminates the need for manually maintaining an API layer between data and code. It presents challenges for testing and maintainability, but we've built tools to help with database migrations and rollbacks, and an end-to-end testing framework that exercises the database routines.\n\n- **[PostgREST](https://postgrest.org/en/v7.0.0/) for auto-generating a REST API for every repository.** We use this excellent library (written in [Haskell](https://www.haskell.org/)) to expose an [OpenAPI](https://github.com/OAI/OpenAPI-Specification)-compatible REST API for every repository on Splitgraph ([example](http://splitgraph.com/mildbyte/complex_dataset/latest/-/api-schema)).\n\n- **Lua ([LuaJIT](https://luajit.org/luajit.html) 5.x), C, and [embedded Python](https://docs.python.org/3/extending/embedding.html) for scripting [PgBouncer](https://www.pgbouncer.org/).** Our main product, the "data delivery network", is a single SQL endpoint where users can query any data on Splitgraph. Really it's a layer of PgBouncer instances orchestrating temporary Postgres databases and proxying queries to them, where we load and cache the data necessary to respond to a query. 
We've added scripting capabilities to enable things like query rewriting, column masking, authentication, ACL, orchestration, firewalling, etc.\n\n- **[Docker](https://www.docker.com/) for packaging services.** Our CI pipeline builds every commit into about a dozen different Docker images, one for each of our services. A production instance of Splitgraph can be running over 60 different containers (including replicas).\n\n- **[Makefile](https://www.gnu.org/software/make/manual/make.html) and [docker-compose](https://docs.docker.com/compose/) for development.** We use [a highly optimized Makefile](https://www.splitgraph.com/blog/makefile) and `docker-compose` so that developers can easily spin up a stack that mimics production in every way, while keeping it easy to hot reload, run tests, or add new services or configuration.\n\n- **[Nomad](https://www.nomadproject.io/) for deployment and [Terraform](https://www.terraform.io/) for provisioning.** We use Nomad to manage deployments and background tasks. Along with Terraform, we're able to spin up a Splitgraph cluster on AWS, GCP, Scaleway or Azure in just a few minutes.\n\n- **[Airflow](https://airflow.apache.org/) for job orchestration.** We use it to run and monitor jobs that maintain our catalog of [40,000 public datasets](https://www.splitgraph.com/blog/40k-sql-datasets), or ingest other public data into Splitgraph.\n\n- **[Grafana](https://grafana.com/), [Prometheus](https://prometheus.io/), [ElasticSearch](https://www.elastic.co/), and [Kibana](https://www.elastic.co/kibana) for monitoring and metrics.** We believe it's important to self-host fundamental infrastructure like our monitoring stack. We use this to keep tabs on important metrics and the health of all Splitgraph deployments.\n\n- **[Mattermost](https://mattermost.com/) for company chat.** We think it's absolutely bonkers to pay a company like Slack to hold your company communication hostage. 
That's why we self-host an instance of Mattermost for our internal chat. And of course, we can deploy it and update it with Terraform.\n\n- **[Matomo](https://matomo.org/) for web analytics.** We take privacy seriously, and we try to avoid including any third party scripts on our web pages (currently we include zero). We self-host our analytics because we don't want to share our user data with third parties.\n\n- **[Metabase](https://www.metabase.com/) and [Splitgraph](https://www.splitgraph.com) for BI and [dogfooding](https://en.wikipedia.org/wiki/Eating_your_own_dog_food)**. We use Metabase as a frontend to a Splitgraph instance that connects to Postgres (our internal databases), MySQL (Matomo's database), and ElasticSearch (where we store logs and DDN analytics). We use this as a chance to dogfood our software and produce fancy charts.\n\n- **The occasional best-of-breed SaaS service for organization.** As a privacy-conscious, independent-minded company, we try to avoid SaaS services as much as we can. But we still find ourselves unable to resist some of the better products out there. For organization we use tools like [Zoom](https://www.zoom.us) for video calls, [Miro](https://miro.com/) for brainstorming, [Notion](https://www.notion.so) for documentation (you're on it!), [Airtable](https://airtable.com/) for workflow management, [PivotalTracker](https://www.pivotaltracker.com/) for ticketing, and [GitLab](https://about.gitlab.com/) for dev-ops and CI.\n\n- **Other fun technologies** including [HAProxy](http://www.haproxy.org/), [OpenResty](https://openresty.org/en/), [Varnish](https://varnish-cache.org/), and bash. 
We don't touch them much because they do their job well and rarely break.\n\n# Life at Splitgraph\n**We are a young company building the initial team.** As an early contributor, you'll have a chance to shape our initial mission, growth and company values.\n\n**We think that remote work is the future**, and that's why we're building a remote-first organization. We chat on [Mattermost](https://mattermost.com/) and have video calls on Zoom. We brainstorm with [Miro](https://miro.com/) and organize with [Notion](https://www.notion.so).\n\n**We try not to take ourselves too seriously**, but we are goal-oriented with an ambitious mission.\n\n**We believe that as a small company, we can out-compete incumbents** by thinking from first principles about how organizations interact with data. We are very competitive.\n\n# Benefits\n- Fully remote\n\n- Flexible working hours\n\n- Generous compensation and equity package\n\n- Opportunity to make high-impact contributions to an agile team\n\n# How to Apply? Questions?\n[**Complete the job application**](https://4o99daw6ffu.typeform.com/to/ePkNQiDp)\n\nIf you have any questions or concerns, feel free to email us at [[email protected]](mailto:[email protected])\n\n#Location\n🌏 Worldwide
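The sgr tool at the heart of this post versions datasets the way Git and Docker version code and images: each snapshot gets a content-derived identifier. The following is a purely conceptual toy sketch of that idea, not sgr's actual implementation or API; all names here are invented:

```python
import hashlib
import json

class ToyImageStore:
    """Content-addressed snapshots of a table, Git/Docker style."""

    def __init__(self):
        self.images = {}  # image ID -> rows

    def commit(self, rows):
        """Store a snapshot and return its content hash (the 'image ID').
        Identical data always yields the identical ID."""
        payload = json.dumps(rows, sort_keys=True).encode()
        image_id = hashlib.sha256(payload).hexdigest()[:12]
        self.images[image_id] = rows
        return image_id

    def checkout(self, image_id):
        """Retrieve the exact rows of a previously committed snapshot."""
        return self.images[image_id]

store = ToyImageStore()
v1 = store.commit([{"id": 1, "name": "alice"}])
v2 = store.commit([{"id": 1, "name": "alice"}, {"id": 2, "name": "bob"}])
```

Content addressing is what makes snapshots reproducible and cheap to deduplicate; the real system layers deltas, packaging and a Postgres-backed storage engine on top.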


See more jobs at Splitgraph

# How do you apply?\n\nThis job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.

InReach Ventures


verified closed
UK or Italy
 
💰 $55k - $70k

java

 

python

 

aws

 
This job post is closed and the position is probably filled. Please do not apply.
InReach is changing how VC in Europe works, for good. Through data, software and Machine Learning, we are building an in-house platform to help us find, reach out to and invest in early-stage European startups, regardless of the city or country they're based in.\n\nWe are looking for a back-end developer to continue the development of InReach's data services. This involves: \n* Cleaning / wrangling / merging / processing the data on companies and founders from across Europe\n* Building data pipelines with the Machine Learning engineers\n* Building APIs to support the front-end investment product used by the Investment team (named DIG)\n\nThis role will involve working across the stack: from DevOps (Terraform) to web scraping and Machine Learning (Python), all the way to data pipelines and web services (Java), and getting stuck into the front-end (Javascript). It's a great opportunity to hone your skills and master some new ones.\n\nIt is important to us that candidates be passionate about helping entrepreneurs and startups. This is our bread and butter and we want you to be involved.\n\nInReach is a remote-first employer and we are counting on this hire to help us become an exceptional place to work for remote employees. 
Whether you are in the office or remote, we are looking for people with excellent written and verbal communication skills.\n\n### Background Reading:\n* [InReach Ventures, the 'AI-powered' European VC, closes new €53M fund](https://techcrunch.com/2019/02/11/inreach-ventures-the-ai-powered-european-vc-closes-new-e53m-fund/?guccounter=1)\n* [The Full-Stack Venture Capital](https://medium.com/entrepreneurship-at-work/the-full-stack-venture-capital-8a5cffe4d71)\n* [Roberto Bonanzinga starts InReach Ventures with DIG platform](https://www.businessinsider.com/roberto-bonanzinga-starts-inreach-ventures-with-dig-platform-2015-11?r=US&IR=T)\n* [Exceptional Communication Guidelines](https://www.craft.do/s/Isrjt4KaHMPQ)\n\n## Responsibilities\n\n* Creatively and quickly coming up with effective solutions to undefined problems\n* Choosing technology that is modern but not hype-driven\n* Developing features and tests quickly with good, clean code\n* Being part of the wider development team, reviewing code and participating in architecture from across the stack\n* Communicating exceptionally, both asynchronously (written) and synchronously (spoken)\n* Helping to shape InReach as a remote-first organization\n\n## Technologies\n\nGiven that this position touches so much of the stack, it will be difficult for a candidate that only has experience in Python or only in Java to be successful in being effective quickly. 
While we expect the candidate to be stronger in one or the other, some professional exposure to both is required.\n\nIn addition to the programming skills and the ability to write well-designed and tested code, experience with infrastructure on modern cloud platforms and sound architectural reasoning are expected.\n\nNone of these is a prerequisite, but they help:\n* Functional Programming\n* Reactive Streams (RxJava2)\n* Terraform\n* Postgres\n* ElasticSearch\n* SQS\n* DynamoDB\n* AWS Lambda\n* Docker\n* Dropwizard\n* Maven\n* Pipenv\n* Javascript\n* React\n* NodeJS\n\n## Interview Process\n* 15m video chat with Ben, CTO, to find out more about InReach and the role\n* 2h data pipeline technical test (Python)\n* 2h web service technical test (Java)\n* 30m architectural discussion with Ben, talking through the work you did\n* 2h interview with the different team members from across InReach. We're a small company so it's important we see how we'll all work together - not just the tech team!\n \n\n#Salary and compensation\n$55,000 — $70,000/year\n\n\n#Location\nUK or Italy
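The cleaning / wrangling / merging work this post describes often reduces to normalizing join keys and merging partial records from multiple sources. A minimal, hypothetical sketch of that pattern; the field names and normalization rules are invented for illustration, not InReach's actual pipeline:

```python
def normalize_name(name):
    """Canonicalize a company name so near-duplicates share one key.
    (Toy rule: lowercase, trim, drop a trailing ' ltd' suffix.)"""
    return name.lower().strip().removesuffix(" ltd").strip()

def merge_records(sources):
    """Merge partial company records keyed by normalized name;
    earlier sources win, later ones fill in missing fields."""
    merged = {}
    for record in sources:
        key = normalize_name(record["name"])
        entry = merged.setdefault(key, {})
        for field, value in record.items():
            entry.setdefault(field, value)
    return merged

rows = [
    {"name": "Acme Ltd", "city": "London"},
    {"name": "acme ltd ", "founded": 2019},
]
companies = merge_records(rows)
```

Real entity resolution needs fuzzier matching (legal-suffix tables, edit distance, registry IDs), but the normalize-then-merge shape stays the same.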


See more jobs at InReach Ventures

# How do you apply?\n\nThis job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.

IN4IT


verified closed
🇪🇺 Eu-Only

cloud

 

aws

 
This job post is closed and the position is probably filled. Please do not apply.
**IN4IT** is growing and looking for a cloud engineer with a strong focus on AWS. The ideal candidate is a fast learner, a self-starter and someone who isn't afraid to take initiative. \n\nYour profile:\n* You have **excellent communication skills** as this is a remote position\n* You have **expert skills** in systems administration, configuration management and automation\n* You know the AWS Well-Architected Framework\n* You have **proven experience** in architecting and implementing scalable infrastructure strategies\n* You bring **expert skills** in cloud operations and architecture, especially Amazon Web Services\n* You have **expert skills** in IAM policies, IAM cross-account setups, and IAM advanced features like Permission Boundaries\n* You embrace DevOps principles and best practices\n* You already have **one or more** AWS certificates and are willing to do more\n\nWe offer:\n* 10 to 20 percent of working time can be used to learn for AWS certification, contribute to our open source projects or learn more about a relevant technology\n* Work fully remote\n* Flexible working hours (we work with international clients)\n\nTooling/languages we use:\n* Terraform\n* Docker\n* Golang\n* Slack\n* AWS EKS (Kubernetes) / AWS ECS\n* . . .\n\nWork location:\n*Within the European Union. **Only EU residents** are eligible for this role due to regulatory requirements of our clients.*\n\n*Please include your **resume** when applying.*\n\n**We are proud that IN4IT is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees.**\n \n\n#Salary and compensation\nBased on experience/year\n\n\n#Location\n🇪🇺 Eu-Only
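The IAM permission boundaries this post mentions cap what an identity policy can grant: an action is effectively allowed only when both the identity policy and the boundary allow it. A deliberately simplified model of that rule (real IAM evaluation also involves explicit denies, SCPs, session policies and resource policies):

```python
def effective_permissions(identity_policy, boundary):
    """Simplified IAM rule: the effective permission set is the
    intersection of the identity policy and the permission boundary."""
    return identity_policy & boundary

# Hypothetical policies expressed as sets of allowed actions.
identity_policy = {"s3:GetObject", "s3:PutObject", "iam:CreateUser"}
boundary = {"s3:GetObject", "s3:PutObject", "s3:ListBucket"}

allowed = effective_permissions(identity_policy, boundary)
```

Note that `iam:CreateUser` is dropped even though the identity policy grants it, and `s3:ListBucket` is not gained just because the boundary allows it: the boundary never grants, it only limits.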


See more jobs at IN4IT

# How do you apply?\n\nThis job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.

IN4IT


verified closed
Europe (Eu-Citizens)

devops

 

cloud

 

sre

 
This job post is closed and the position is probably filled. Please do not apply.
**Cloud & DevOps engineer (Europe-only applicants)**\n\nIN4IT is growing and looking for a Cloud & DevOps engineer who wants to work on super exciting DevOps and Cloud projects using cutting-edge technology. The ideal candidate is a fast learner, a self-starter and someone who isn't afraid to take initiative.\n\n**You'll be using the following technologies:**\n* Containers: Docker, AWS ECS, Kubernetes\n* Cloud automation: Terraform\n* Cloud providers: mainly AWS (S3, EC2, RDS, ECS, IAM, etc)\n* CI/CD: Bitbucket Pipelines, GitLab, GitHub, AWS CodePipeline/CodeDeploy/CodeBuild\n* OS: Alpine / Ubuntu / Amazon Linux\n* IdP: SAML, OIDC with providers like OneLogin\n\n**You'll be automating using the following programming languages:**\n* Bash\n* Golang (you'll need to be interested in learning it)\n\n**We offer:**\n* Opportunity to learn to work with these cutting-edge technologies\n* Work remotely or on-site\n* Flexible working hours (we work with international clients)\n\n**Work location:**\n\nWe are a remote-first company, so location is not that important to us (within Europe). We do have an office in Herent, Belgium if you want to work from the office. \n\nIf you are excited by this opportunity, we would love to hear from you!\n\nWe are proud that IN4IT is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees.\n\n#Location\nEurope (Eu-Citizens)


See more jobs at IN4IT

# How do you apply?\n\nThis job post has been closed by the poster, which means they probably have enough applicants now. Please do not apply.

Cycloid


verified closed
🇪🇺 EU

cloud

 

devops

 

aws

 

concourse

This job post is closed and the position is probably filled. Please do not apply.
Our vision: we empower people and customers. \n\nOur purpose: we want to simplify DevOps and Cloud adoption, and empower people (dev, ops, project managers, CTOs) to focus on what matters through our DevOps platform & services.\n\nAbout you: \nYou are a passionate DevOps engineer (which is the most important thing), a fun person (we like fun people), and you have expertise in operations and troubleshooting. \n\nYou like various technologies such as Kubernetes, Docker, Packer, Ansible, Terraform and Prometheus.\n\nOn the Ops side, you have successful experience with a Cloud provider; certifications are preferred.\n\nYou like to design architecture and collaborate with developers to fix problems and provide long-term solutions. Providing service and automation on the Cloud is a pleasure.\n\nYou like to identify problems and are capable of commenting in a constructive, positive & open way to the team. Mindset is what matters most.\n\nOn the Dev / R&D side, you want to be a change-maker by developing products, our DevOps platform, that will facilitate DevOps and Cloud adoption for our customers.\n\nThe main difference between a SysOps engineer and a DevOps engineer is that you have experience in development. \n\nIn development: experience with C / C++ is nice, Golang would be better, and any other concrete language experience counts. \n\nYou will work 50% on integration / services & 50% on development or R&D, both to improve your skills and to break the monotony, as we don't dedicate people to a single customer. \n\nYou can decide to work on-site in Paris or remotely wherever you are located, but the whole organization works asynchronously, keeping a good team culture through Slack and conference meetings. We have people in the UK, Spain, Germany, Israel and Sweden.\n\nYou have adequate written and oral communication skills in English; any other languages would be a plus.\n\n#Location\n🇪🇺 EU


See more jobs at Cycloid

# How do you apply?\n\nThis job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.