
3 remote Docker + Git jobs at companies like Help Scout and Doximity, last posted 3 months ago





Help Scout (verified)

Ops Engineer

Tags: aws, linux, git, chef

Americas · posted 3 months ago

Apply


Stats (beta): 👁 92 views, ✍️ 37 applied (40%)
As a member of our Ops team, you will be at the heart of nearly every application, tool, and service at Help Scout. The work you do every day will reflect the team mission: ensure uptime and security across all of our applications while developing and supporting tools to enable customer bliss. While the mission might be straightforward, anyone who has delivered high-availability services and developer tooling at scale knows this is no simple task.

To help us with our mission, we are seeking an experienced Ops Engineer to join our team. You will have a direct impact on Help Scout's success while helping more than 8,000 businesses around the world. While customers [love our product](https://www.helpscout.net/customers/), it means nothing if they can't access our services with great performance.

# Responsibilities

**Technologies we work with**

* AWS, Linux (Ubuntu/CentOS), Chef, Git/GitHub, RabbitMQ, AWS Aurora MySQL & PostgreSQL, MongoDB, Redis, Jenkins, Docker/Compose, New Relic, Sensu, PagerDuty, Ruby, Go, Python, Java, and PHP.

**About the role**

* You'll be working on a small team of six (that includes one of our co-founders) and in collaboration with our software developers to build, deploy, secure, manage, and optimize highly available, fault-tolerant, and horizontally scalable systems in AWS.
* Ideally, we are looking to add more coverage in the UTC-5 timezone, but we are open to candidates in UTC-6, -7, or -8 if you are willing to time-shift to accommodate the preferred timezone.
* Our engineering teams communicate mostly via Slack and are committed to [remote, agile development](https://www.helpscout.net/blog/agile-remote-teams/). When your code is ready, you'll create and send a pull request with test cases and tag your team for review.
* We are investing heavily in continuous integration and delivery and strive to uphold immutable infrastructure standards.
* You'll work autonomously for the most part, and we trust you to get work done when and where you can be productive.
* To ensure excellent service to our customers, you will be part of our rotating on-call team.

**A note about on-call**

* The 5-week rotation follows this format: 1 week of backup on-call (which rarely sees much action), 1 week of being on-call, followed by a 3-week hiatus from on-call.
* Our on-call shift is not particularly wearisome, but as a thank-you for carrying the weight for the week, the day following your shift is a free day off if you want to take it. We want you happy, healthy, and well-rested!

# Requirements

* You have a deep understanding of what it takes to run SaaS at scale, and a solid understanding of Linux systems and networking, from kernel to shell, system libraries, file systems, and client-server protocols.
* You have a growth mindset, a passion for learning, and are willing to lean into discomfort for the good of our customers and product.
* You are proficient and comfortable in the AWS ecosystem.
* Security engineering is near and dear to your heart; you build with and advocate for a security mindset when implementing new features and infrastructure.
* You are adept at automating service and infrastructure configuration via industry-standard tools (e.g. Chef, Terraform).
* You have experience building continuous deployment and testing tools. Bonus points if you've built and managed a containerized production deployment environment at scale.
* You have experience working with MTAs (e.g. exim, postfix) and spam filtering (e.g. rspamd, SpamAssassin).
* You became an engineer because you like building systems, tools, or products that help people.
* You design and build systems that work well and fail gracefully.
* You write code and scripts that other engineers can easily read and understand, and you welcome reviews and feedback from your peers. You are comfortable writing tests, and you thoroughly verify your work before you deploy.
* You're a great communicator and have an excellent command of written and spoken English. As a remote company, we rely on clear communication for collaboration and execution.
* You believe remote teams are the future of work, or are at least excited about the idea. You have experience working with remote teams or can adjust your work and time-management style to be remote-friendly.
* You are helpful and empathetic and care about building on our company culture that embraces these qualities.

# Location

- Americas
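The requirements above emphasize AWS proficiency, infrastructure automation, and a security mindset. As a rough, hypothetical sketch of that kind of day-to-day ops automation (not Help Scout's actual tooling), the Python script below uses boto3 to flag S3 buckets that lack a default encryption configuration:

```python
# Illustrative sketch only: scan S3 buckets and report any without default
# encryption. Assumes boto3 is installed and AWS credentials are configured.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        # Raises ClientError if the bucket has no default encryption config.
        s3.get_bucket_encryption(Bucket=name)
    except ClientError as err:
        code = err.response["Error"]["Code"]
        if code == "ServerSideEncryptionConfigurationNotFoundError":
            print(f"Bucket without default encryption: {name}")
        else:
            raise
```

In practice a check like this would more likely run as a scheduled job or feed a monitoring system such as Sensu than be executed by hand.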

See more jobs at Help Scout

# How do you apply?

Please fill out an application on our website to start the process!
Apply for this Job

👉 Please mention that you found the job on Remote OK as a thank-you to us; this helps us get more companies to post here!

When applying for jobs, you should NEVER have to pay to apply. That is a scam! Always verify you're actually talking to the company in the job post and not an imposter. Scams in remote work are rampant, so be careful! When clicking the Apply button above, you will leave Remote OK and go to that company's job application page outside this site. Remote OK accepts no liability or responsibility as a consequence of any reliance upon information on external sites or here.


Doximity (verified)

Data Engineer, Infrastructure

Tags: elasticsearch, git, python, engineer

Posted 1 year ago

Apply


Stats (beta): 👁 199 views, ✍️ 0 applied (0%)
Why work at Doximity?

Doximity is the leading social network for healthcare professionals, with over 70% of U.S. doctors as members. We have strong revenues, real market traction, and we're putting a dent in the inefficiencies of our $2.5 trillion U.S. healthcare system. After the iPhone, Doximity is the fastest-adopted product by doctors of all time. Our founder, Jeff Tangney, is the founder & former President and COO of Epocrates (IPO in 2010), and Nate Gross is the founder of the digital health accelerator RockHealth. Our investors include top venture capital firms who've invested in Box, Salesforce, Skype, SpaceX, Tesla Motors, Twitter, Tumblr, Mulesoft, and Yammer. Our beautiful offices are located in SoMa, San Francisco.

You will join a small team of data infrastructure engineers (4) to build and maintain all aspects of our data pipelines, ETL processes, data warehousing, ingestion, and overall data infrastructure. We have one of the richest healthcare datasets in the world, and we're not afraid to invest in all things data to enhance our ability to extract insight.

Job Summary

- Help establish robust solutions for consolidating data from a variety of data sources.
- Establish data architecture processes and practices that can be scheduled, automated, and replicated, and that serve as standards for other teams to leverage.
- Collaborate extensively with the DevOps team to establish best practices around server provisioning, deployment, maintenance, and instrumentation.
- Build and maintain efficient data integration, matching, and ingestion pipelines.
- Build instrumentation, alerting, and error-recovery systems for the entire data infrastructure.
- Spearhead, plan, and carry out the implementation of solutions while self-managing.
- Collaborate with product managers and data scientists to architect pipelines that support delivery of recommendations and insights from machine learning models.

Required Experience & Skills

- Fluency in Python; SQL mastery.
- Ability to write efficient, resilient, and evolvable ETL pipelines.
- Experience with data modeling, entity-relationship modeling, normalization, and dimensional modeling.
- Experience building data pipelines with Spark and Kafka.
- Comprehensive experience with Unix, Git, and AWS tooling.
- Astute ability to self-manage, prioritize, and deliver functional solutions.

Preferred Experience & Skills

- Experience with MySQL replication, binary logs, and log shipping.
- Experience with additional technologies such as Hive, EMR, Presto, or similar.
- Experience with MPP databases such as Redshift and working with both normalized and denormalized data models.
- Knowledge of data design principles and experience using ETL frameworks such as Sqoop or equivalent.
- Experience designing, implementing, and scheduling data pipelines on workflow tools like Airflow or equivalent.
- Experience working with Docker, PyCharm, Neo4j, Elasticsearch, or equivalent.

Our Data Stack

- Python, Kafka, Spark, MySQL, Redshift, Presto, Airflow, Neo4j, Elasticsearch

Fun Facts About the Team

- We have one of the richest healthcare datasets in the world.
- Business decisions at Doximity are driven by our data, analyses, and insights.
- Hundreds of thousands of healthcare professionals will utilize the products you build.
- Our R&D team makes up about half the company, and the product is led by the R&D team.
- Our Data Science team is comprised of about 20 people.
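The role calls for scheduling data pipelines on workflow tools like Airflow. Purely as an illustration (not Doximity's actual pipeline), here is a minimal Airflow DAG sketch; the DAG name, task names, and empty extract/load callables are hypothetical placeholders, and it assumes Airflow 2.x:

```python
# Minimal, hypothetical Airflow DAG: a daily two-step ETL (extract, then load).
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_source_rows():
    # Placeholder: pull the day's rows from a source database (e.g. MySQL).
    pass


def load_to_warehouse():
    # Placeholder: load the extracted batch into the warehouse (e.g. Redshift).
    pass


with DAG(
    dag_id="daily_etl_example",  # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    default_args={"retries": 2, "retry_delay": timedelta(minutes=5)},
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract", python_callable=extract_source_rows)
    load = PythonOperator(task_id="load", python_callable=load_to_warehouse)

    extract >> load  # load runs only after extract succeeds
```

A production pipeline would put real extraction and load logic in the callables and add alerting on failure, which is the instrumentation and error-recovery work the summary above describes.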

See more jobs at Doximity

Visit Doximity's website

# How do you apply?

Use the Apply button below.
Apply for this Job



Doximity (verified)

Machine Learning Engineer

Tags: git, python, machine learning, data science

Posted 1 year ago

Apply


Stats (beta): 👁 199 views, ✍️ 100 applied (50%)
Why work at Doximity?

Doximity is the leading social network for healthcare professionals, with over 70% of U.S. doctors as members. We have strong revenues, real market traction, and we're putting a dent in the inefficiencies of our $2.5 trillion U.S. healthcare system. After the iPhone, Doximity is the fastest-adopted product by doctors of all time. Our founder, Jeff Tangney, is the founder & former President and COO of Epocrates (IPO in 2010), and Nate Gross is the founder of the digital health accelerator RockHealth. Our investors include top venture capital firms who've invested in Box, Salesforce, Skype, SpaceX, Tesla Motors, Twitter, Tumblr, Mulesoft, and Yammer. Our beautiful offices are located in SoMa, San Francisco.

Skills & Requirements

- 3+ years of industry experience; M.S. in Computer Science or other relevant technical field preferred.
- 3+ years of experience collaborating with data science and data engineering teams to build and productionize machine learning pipelines.
- Fluent in SQL and Python; experience using Spark (pyspark) and working with both relational and non-relational databases.
- Demonstrated industry success in building and deploying machine learning pipelines, as well as feature engineering from semi-structured data.
- Solid understanding of the foundational concepts of machine learning and artificial intelligence.
- A desire to grow as an engineer through collaboration with a diverse team, code reviews, and learning new languages/technologies.
- 2+ years of experience using version control, especially Git.
- Familiarity with Linux, AWS, and Redshift.
- Deep learning experience preferred.
- Work experience with REST APIs, deploying microservices, and Docker is a plus.

What you can expect

- Employ appropriate methods to develop performant machine learning models at scale, owning them from inception to business impact.
- Plan, engineer, and deploy both batch-processed and real-time data science solutions to increase user engagement with Doximity's products.
- Collaborate cross-functionally with data engineers and software engineers to architect and implement infrastructure in support of Doximity's data science platform.
- Improve the accuracy, runtime, scalability, and reliability of machine intelligence systems.
- Think creatively and outside of the box. The ability to formulate, implement, and test your ideas quickly is crucial.

Technical Stack

- We historically favor Python and MySQL (SQLAlchemy), but leverage other tools when appropriate for the job at hand.
- Machine learning (linear/logistic regression, ensemble models, boosted models, deep learning models, clustering, NLP, text categorization, user modeling, collaborative filtering, topic modeling, etc.) via industry-standard packages (sklearn, Keras, NLTK, Spark ML/MLlib, GraphX/GraphFrames, NetworkX, gensim).
- A dedicated cluster is maintained to run Apache Spark for computationally intensive tasks.
- Storage solutions: Percona, Redshift, S3, HDFS, Hive, Neo4j, and Elasticsearch.
- Computational resources: EC2, Spark.
- Workflow management: Airflow.

Fun facts about the Data Science team

- We have one of the richest healthcare datasets in the world.
- We build code that addresses user needs, solves business problems, and streamlines internal processes.
- The members of our team bring a diverse set of technical and cultural backgrounds.
- Business decisions at Doximity are driven by our data, analyses, and insights.
- Hundreds of thousands of healthcare professionals will utilize the products you build.
- A couple of times a year we run a co-op where you can pick a few people you'd like to work with and drive a specific company goal.
- We like to have fun: company outings, team lunches, and happy hours!
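The stack above lists scikit-learn and tasks such as text categorization. As a minimal, hypothetical illustration of that kind of model (not Doximity's code), the snippet below fits a TF-IDF plus logistic regression pipeline on two made-up documents:

```python
# Toy example: a scikit-learn text-classification pipeline on made-up data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

docs = ["cardiology conference recap", "new oncology trial results"]  # made-up data
labels = ["cardiology", "oncology"]

model = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),   # word and bigram features
    ("clf", LogisticRegression(max_iter=1000)),       # simple linear classifier
])
model.fit(docs, labels)
print(model.predict(["late-breaking cardiology study"]))
```

A pipeline like this is the simplest shape of what the posting describes; productionizing it would involve feature engineering at scale, scheduling with Airflow, and deployment alongside the data engineering team.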

See more jobs at Doximity

Visit Doximity's website

# How do you apply?

Use the Apply button below.
Apply for this Job
