This job post is closed and the position is probably filled. Please do not apply.
The Opportunity

SecurityScorecard is hiring an Ops Engineer who is motivated to help continue automating and scaling our infrastructure, bridging the gap between our global development and operational teams. The Ops Engineer will be responsible for setting up and managing project development and test environments, as well as the software configuration management processes for the entire application development lifecycle. Your role would be to ensure the optimal availability, latency, scalability, and performance of our product platforms. You would also be responsible for automating production operations, promptly notifying backend engineers of platform issues, and tracking long-term quality metrics.

Our infrastructure is based on AWS, with a mix of managed services such as RDS, ElastiCache, and SQS, as well as hundreds of EC2 instances managed with Ansible and Terraform. We actively use three AWS regions and have equipment in several data centers across the world.

Responsibilities

* Training, mentoring, and lending expertise to coworkers on operational and security best practices.
* Reviewing and providing feedback on GitHub Pull Requests from both team members and development teams; a significant percentage of our Software Engineers have written Terraform.
* Identifying opportunities for technical and process improvement and owning the implementation.
* Championing the concepts of immutable containers, Infrastructure as Code, stateless applications, and software observability throughout the organization.
* Tuning system performance with a focus on high availability and scalability.
* Building tools to improve usability and automate processes.
* Keeping products up and operating at full capacity.
* Assisting with migration processes as well as backup and replication mechanisms.
* Working on a large-scale distributed environment with a focus on scalability, reliability, and performance.
* Ensuring proper monitoring and alerting are configured.
* Investigating incidents and performance lapses.

Come help us with projects such as…

* Extending our compute clusters to support low-latency, on-demand job execution
* Turning pets into cattle
* Cross-region replication of systems and corresponding data to support low-latency access
* Rolling out application performance monitoring to existing services, extending integrations where required
* Migrating from a self-hosted ELK stack to a SaaS offering
* Continuously improving CI/CD processes to make builds and deployments faster, safer, and more consistent
* Extending a global VPN WAN to a data center with IPsec+BGP

Requirements

* 3+ years of DevOps and/or Operations experience
* 1+ years of production environment experience with Amazon Web Services (AWS)
* 1+ years using SQL databases (MySQL, Oracle, Postgres)
* Scripting ability (Bash, Python; C++ a plus)
* Strong experience with CI/CD processes (Jenkins, Ansible) and automated configuration tools (Puppet/Chef/Ansible)
* Experience with container orchestration (AWS ECS, Kubernetes, Marathon/Mesos)
* Ability to work as part of a highly collaborative team
* Understanding of monitoring tools like Datadog
* Strong written and verbal communication skills

Nice to Have

* You knew exactly what is meant by "turning pets into cattle"
* Experience working with Kubernetes on bare metal and/or AWS Elastic Kubernetes Service (EKS)
* Experience with RabbitMQ, MongoDB, or Apache Kafka
* Experience with Presto or Apache Spark
* Familiarity with computation orchestration tools such as HTCondor, Apache Airflow, or Argo
* Understanding of network concepts: OSI layers, firewalls, DNS, split-horizon DNS, VPN, routing, BGP, etc.
* A deep understanding of AWS IAM and how it interacts with S3 buckets
* Experience with SAFe
* Strong programming skills in 2+ languages


# How do you apply?

This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.