Corelight is hiring a Remote Software Engineer, Cloud Infrastructure
By making evidence the heart of security, we help customers stay ahead of ever-changing cyber attacks.

Corelight is the cybersecurity company that transforms network and cloud activity into evidence. Evidence that elite defenders use to proactively hunt for threats, accelerate response to cyber incidents, gain complete network visibility, and create powerful analytics using machine-learning and behavioral-analysis tools. Easily deployed, and available in traditional and SaaS-based formats, Corelight is the fastest-growing Network Detection and Response (NDR) platform in the industry. And we are the only NDR platform that leverages the power of open-source projects, in addition to our own technology, to deliver Intrusion Detection (IDS), Network Security Monitoring (NSM), and Smart PCAP solutions. We sell to some of the most sensitive, mission-critical large enterprises and government agencies in the world.

This position performs functions essential to the health and growth of our product partners through monitoring, automation, reducing toil, and building resilience into our infrastructure and APIs. This role is an excellent opportunity for someone passionate about and committed to designing, building, and maintaining high-performance Linux and cloud-based systems and communications infrastructure.
The core pillars of cloud operations are:

* Maintain and build external and internal cloud services, meeting agreed-upon SLIs, SLOs, and SLAs
* Assist in root administration of complex cloud environments (primarily AWS)
* Evangelize and implement best practices such as automation, continuous integration and deployment (CI/CD), monitoring, and testing
* Encourage automated secret management
* Build and maintain systems that are fault-tolerant and resilient

Your Role and Responsibilities

* Participate in the design, development, testing, and maintenance of cloud services
* Understand account and network administration best practices across all environments, and assist leadership with them
* Provide advice and assistance on cloud architecture and APIs
* Implement automation, disaster recovery, and system resilience best practices
* Implement improvements in architecture and design; facilitate and perform tests and reviews of our code, products, services, and infrastructure

Minimum Qualifications

* 3+ years of software engineering experience in Go or a similar statically typed language
* 3+ years in operations engineering
* Experience in application and database engineering for scale
* Experience in application support practices and procedures for critical platforms
* Experience in application monitoring and profiling
* Practical experience with infrastructure as code, such as Terraform and Ansible
* Familiarity with AWS, particularly Lambda, API Gateway, S3, VPC, Route 53, IAM, and CloudFront; familiarity with the AWS SDKs and the AWS CLI
* Understanding of networking and NetSec best practices
* Effective communication skills, team spirit, problem-solving, and a positive attitude
* Bachelor's or Master's degree in Computer Science or a related field, or equivalent experience

Preferred Skills

* Experience in scripting languages like Python and JavaScript
* Working knowledge of GCP and Azure
* Familiarity with containerized architectures using Docker and Kubernetes
* Familiarity with machine learning infrastructure

We are proud of our culture and values: driving diversity of background and thought, low-ego results, applied curiosity, and tireless service to our customers and community. Corelight is committed to a geographically dispersed yet connected employee base, with employees working from home and office locations around the world. Fueled by an accelerating revenue stream and investments from top-tier venture capital organizations such as CrowdStrike, Accel, and Insight, we are rapidly expanding our team.

Check us out at www.corelight.com

#Salary and compensation
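The SLI/SLO/SLA pillar above ultimately reduces to error-budget arithmetic: an availability SLO implies a fixed amount of downtime the service may accrue per window. As an illustrative sketch only (not Corelight's actual tooling), a 99.9% monthly SLO leaves roughly 43 minutes of allowed downtime:

```python
def error_budget_minutes(slo: float, window_days: int = 30) -> float:
    """Total minutes of downtime a service may accrue over the window
    while still meeting the availability SLO (e.g. slo=0.999 for 99.9%)."""
    return window_days * 24 * 60 * (1.0 - slo)

def budget_remaining(slo: float, downtime_minutes: float, window_days: int = 30) -> float:
    """Fraction of the error budget still unspent; negative means the SLO is blown."""
    budget = error_budget_minutes(slo, window_days)
    return 1.0 - downtime_minutes / budget
```

For example, 21.6 minutes of downtime against a 99.9% monthly SLO leaves half the budget unspent, a common trigger point for slowing down risky deploys.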
No salary data published by the company, so we estimated a salary based on similar jobs related to Python, Docker, Cloud, Engineer, and Linux:
$60,000 — $110,000/year
#Benefits

* 401(k)
* Distributed team
* Async
* Vision insurance
* Dental insurance
* Medical insurance
* Unlimited vacation
* Paid time off
* 4 day workweek
* 401k matching
* Company retreats
* Coworking budget
* Learning budget
* Free gym membership
* Mental wellness budget
* Home office budget
* Pay in crypto
* Pseudonymous
* Profit sharing
* Equity compensation
* No whiteboard interview
* No monitoring system
* No politics at work
* We hire old (and young)
#Location
San Francisco, California, United States
Please mention that you found the job on Remote OK; this helps us get more companies to post here. Thanks!
When applying for jobs, you should NEVER have to pay to apply. You should also NEVER have to pay for equipment that the company promises to reimburse later, and never pay for any training you are required to do. Those are scams! NEVER PAY FOR ANYTHING! Posts that link to pages about "how to work online" are also scams; don't use them or pay for them. Always verify that you're actually talking to the company in the job post and not an imposter: a good idea is to check that the domain name of the site or email matches the company's main domain name. Scams in remote work are rampant, so be careful! When clicking the apply button above, you will leave Remote OK and go to the company's job application page outside this site. Remote OK accepts no liability or responsibility as a consequence of any reliance upon information there (external sites) or here.
We are seeking Data Engineers with a passion for sports to develop cloud-based data pipelines and automated data processing for our world-class sports intelligence platforms in baseball, basketball, cricket, eSports, football (American), golf, hockey, soccer, and tennis. Through your work, you can support the professional teams in our exclusive partner network in their efforts to compete and win championships.

Zelus Analytics is a fully remote company working directly with teams across the NBA, MLB, NFL, IPL, and NHL, in addition to a number of soccer teams around the globe. Zelus unites a fast-growing startup environment with a research-focused culture that embraces our core values of integrity, innovation, and inclusion. We pride ourselves on providing meaningful mentorship that offers our team the opportunity to develop and expand their skill sets while also engaging with the broader analytics community. In so doing, we hope to create a new path for a broader group of highly talented people to push the cutting edge of sports analytics.

We believe that a diverse team is vital to building the world's best sports intelligence platform. Thus, we strongly encourage you to apply if you identify with any marginalized community across race, ethnicity, gender, sexual orientation, veteran status, or disability.
At Zelus, we are committed to creating an inclusive environment where all of our employees are enabled and empowered to succeed and thrive.

As Zelus employees advance in experience and level, they are expected to build on their competencies and expertise and demonstrate increasing impact, independence, and leadership within their roles.

More specifically, as a Zelus Data Engineer, you will be expected to:

* Design, develop, document, and maintain the schemas and ETL pipelines for our internal sports databases and data warehouses
* Implement and test collection, mapping, and storage procedures for secure access to team, league, and third-party data sources
* Develop algorithms for quality assurance and imputation to prepare data for exploratory analysis and quantitative modeling
* Profile and optimize automated data processing tasks
* Coordinate with data providers around planned changes to raw data feeds
* Deploy and maintain system and database monitoring tools
* Collaborate and communicate effectively in a distributed work environment
* Fulfill other related duties and responsibilities, including rotating platform support

Additionally, a Data Engineer II will be expected to:

* Create data ingestion and integration workflows that scale and can be easily adapted to future use cases
* Assess, provision, monitor, and maintain the appropriate infrastructure and tooling to execute data engineering workflows

Additionally, a Senior Data Engineer will be expected to:

* Research, design, and test generalizable software architectures for data ingestion, processing, and integration, and guide organizational adoption
* Collaborate with data science to design and implement vendor-agnostic data models that support downstream modeling efforts
* Lead team-wide implementation of data engineering standards
* Effectively communicate complex technical concepts to both internal and external audiences
* Provide guidance and technical mentorship for junior engineers
* Assist with recruiting and outreach for the engineering team, including building a diverse network of future candidates

Additionally, a Senior Data Engineer II will be expected to:

* Identify and implement generalizable strategies for infrastructure maintenance and data-related cost savings
* Break down complex data engineering projects into actionable work plans, including proposed task assignments with clear design specifications
* Assist in defining data engineering standards for the organization

A qualified Data Engineer candidate will be able to demonstrate several of the following and will be excited to learn the rest through the mentorship provided at Zelus:

* Academic and/or industry experience in back-end software design and development
* Experience with ETL architecture and development in a cloud-based environment
* Fluency in SQL development and an understanding of database and data warehousing technologies
* Proficiency with Python (preferred), Scala, and/or other data-oriented programming languages
* Experience with automated data quality validation across large data sets
* Familiarity working with Linux servers in a virtualized/distributed environment
* Strong software-engineering and problem-solving skills

A qualified Senior Data Engineer candidate will be able to demonstrate all of the above at a higher level of competency, plus the following:

* Expertise developing complex databases and data warehouses for large-scale, cloud-based analytics systems
* Experience with task orchestration and workflow automation tools
* Experience building and overseeing team-wide data quality initiatives
* Experience adapting, retraining, and retooling in a rapidly changing technology environment
* Desire and ability to successfully mentor junior engineers

Starting salaries range from*:

* $87,000 to $102,000 for Data Engineer
* $102,000 to $118,000 for Data Engineer II
* $118,000 to $136,000 for Senior Data Engineer
* $136,000 to $160,000 for Senior Data Engineer II

*Compensation paid in non-US currency will be in a comparable range, adjusted for differences in total cost of employment.

Zelus has a fully distributed workforce, spanning multiple states and countries, with a formal process for establishing compensation equity across its global staff. In addition to competitive salaries, our full-time compensation packages include equity grants and comprehensive benefits, such as an annual incentive bonus plan, supplemental health, vision, and dental insurance, and flexible PTO, all of which allow us to attract and retain a world-class team.

As an equal opportunity employer, Zelus does not discriminate on the basis of race, ethnicity, color, religion, creed, gender, gender expression or identification, sexual orientation, marital status, age, national origin, disability, genetic information, military status, or any other characteristic protected by law. It is our policy to provide reasonable accommodations for applicants and employees with disabilities. Please let us know if reasonable accommodation is needed to participate in the job application or interview process.

In most jurisdictions, Zelus is an at-will employer; employment at Zelus is for an indefinite period of time and is subject to termination by the employer or the employee at any time, with or without cause or notice.

#Salary and compensation
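The automated data-quality validation mentioned in the qualifications above often starts as a rule-based row filter run before data lands in the warehouse. A minimal, hypothetical sketch (the field names are illustrative, not Zelus's actual schema):

```python
def failing_rows(rows, required_fields, non_null_fields=()):
    """Return the rows that fail basic quality checks:
    a required key is absent, or a mandatory field holds a null value."""
    bad = []
    for row in rows:
        if not required_fields.issubset(row):
            bad.append(row)  # schema violation: missing required key
        elif any(row.get(f) is None for f in non_null_fields):
            bad.append(row)  # completeness violation: null in a mandatory field
    return bad

games = [
    {"game_id": 1, "score": 3},
    {"game_id": 2, "score": None},  # fails: null score
    {"score": 5},                   # fails: missing game_id
]
rejected = failing_rows(games, required_fields={"game_id"}, non_null_fields=("score",))
```

In practice these checks would be expressed in a validation framework and run as part of the ETL pipeline, with failing rows quarantined for imputation or manual review rather than silently dropped.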
No salary data published by the company, so we estimated a salary based on similar jobs related to Design, Python, Senior, Engineer, and Linux:
$60,000 — $110,000/year
Coin Metrics is hiring a Remote Infrastructure Engineer
Coin Metrics is a leading provider of cryptoasset data for institutions. We deliver transparent and actionable data and analytics to various industry stakeholders, including asset managers, custodians, trading venues, research desks, and data/application providers. Coin Metrics' data empowers its clients and the public to better understand, use, and value open crypto networks.

Join a fast-paced startup pioneering novel metrics, data products, and intelligence solutions that offer insights into the economics, markets, usage, health, and other aspects of public cryptocurrency blockchains like Bitcoin and Ethereum, as well as other crypto networks.

You will be surrounded by talented people passionate about decentralized economies and the data behind them. Break new ground, create exciting new data-driven research and products, and help shape the future of finance.

YOUR PURPOSE

Coin Metrics is recruiting a Senior Infrastructure Engineer to support network nodes, market data collection, analytics factories, and data delivery operations. They will work with the infrastructure team to build out, maintain, and troubleshoot our rapidly expanding infrastructure.

YOUR VALUE

* Maintain blockchain nodes and monitor key areas such as network synchronization, connected peers, node configurations, and supporting infrastructure
* Maintain persistent data stores by administering Postgres databases and Kafka streaming servers, especially management of disk space, resiliency, and backups
* Investigate alerts, reported issues, and engineer requests, and respond promptly
* Plan and create Linux infrastructure on bare metal using Terraform, Ansible, and vendor management tools
* Plan and create Kubernetes clusters and containers using Ansible and Helm
* Plan hybrid cloud architecture on AWS, GCP, and bare metal using Terraform and Ansible
* Administer network capabilities including CDN, DNS, proxies, and web servers
* Design and support CI/CD pipelines using Bash, Python, Docker, and other languages/tools
* Monitor infrastructure using tools like Datadog, Splunk, Prometheus, and/or Grafana

YOUR EXPERTISE AND EXPERIENCE

* 5+ years of direct experience in infrastructure, DevOps, or back-end engineering roles with Linux
* Demonstrated ability to manage multiple projects, priorities, and interruptions while communicating status and documenting activities appropriately; correctly track details across multiple environments
* Experience with infrastructure-as-code and containerization toolkits (Terraform, CloudFormation, Ansible, Helm, Docker, Kubernetes)
* Experience configuring, running, and maintaining full nodes of various networks, preferably in containers; bonus points for bootstrapping node operations for enterprise clients and/or applications
* Experience with block file systems and database storage administration (Ext4, ZFS, Postgres, MySQL, Oracle); bonus points for familiarity with file-based and object-based storage solutions
* Relevant work in, or interest in, infosec and DevSecOps, specifically network theory fields such as topology analysis; bonus points for experience responding to audits and certifications
* Experience with scaling and migrating systems in a rapidly evolving environment that allows for little or no downtime, including a good understanding of incident management processes
* Bonus points if you've managed at least one cloud infrastructure provider (AWS, Azure, GCP)

LIFE AT COIN METRICS

Coin Metrics is a fun and fast-paced team with employees located across the globe. We are united by our OPEN (Open, Pioneering, Elucidating, and Neutral) core values. Our employees are empowered to do what's best for our products, customers, and team members. Other benefits of working at Coin Metrics include:

* Competitive salary, 401(k) retirement plan or pension depending on location, bonus and options plans
* Comprehensive medical, dental, vision (dependent on location)
* Remote or hybrid work options
* Paid time off

Coin Metrics is an employer committed to diversity in its workforce and is proud to be an Equal Opportunity Employer. All qualified applicants receive consideration for employment without regard to race, color, religion, creed, gender, national origin, age, disability, veteran status, sex, gender expression or identity, sexual orientation, or any other basis protected by applicable law. For US applicants, you may view the Pay Transparency, Employee Rights, and Know Your Rights notices by clicking on their corresponding links. Additionally, Coin Metrics participates in the E-Verify program where applicable, as required by law.

Coin Metrics is also committed to providing reasonable accommodations to individuals with disabilities. If you need reasonable accommodation because of a disability for any part of the employment process, please send an e-mail to [email protected] and let us know the nature of your request and your contact information.

#Salary and compensation
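The disk-space management called out above for Postgres and Kafka hosts typically starts with a simple threshold check feeding an alerting pipeline. A minimal sketch of that guardrail (the 85% threshold is an illustrative default, not Coin Metrics' actual policy):

```python
import shutil

def disk_usage_pct(path: str = "/") -> float:
    """Percentage of the filesystem containing `path` that is currently in use."""
    usage = shutil.disk_usage(path)
    return 100.0 * usage.used / usage.total

def should_alert(path: str = "/", threshold_pct: float = 85.0) -> bool:
    """True when usage has crossed the alerting threshold."""
    return disk_usage_pct(path) >= threshold_pct
```

A check like this would normally run from a monitoring agent (e.g. a Prometheus exporter) on each database and broker host, so operators can expand volumes or prune retention before Postgres or Kafka run out of space.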
No salary data published by the company, so we estimated a salary based on similar jobs related to Crypto, Ethereum, InfoSec, Docker, DevOps, Cloud, Node, Senior, Engineer, and Linux:
$57,500 — $110,000/year
#Location
Remote - International
Spotter is hiring a Remote Senior AWS Cloud Engineer
What You'll Do:

We're looking for a talented and intensely curious Senior AWS Cloud Engineer who is nimble and focused, with a startup mentality. In this newly created role you will be the liaison between data engineers, data scientists, and analytics engineers. You will work to create cutting-edge architecture that provides increased performance, scalability, and concurrency for Data Science and Analytics workflows.

Responsibilities

* Provide AWS infrastructure support and systems administration in support of new and existing products, implemented through IAM, EC2, S3, AWS networking (VPC, IGW, NGW, ALB, NLB, etc.), Terraform, CloudFormation templates, and security: Security Groups, GuardDuty, CloudTrail, Config, and WAF.
* Monitor and maintain production, development, and QA cloud infrastructure resources for compliance with all six pillars of the AWS Well-Architected Framework, including the Security pillar.
* Develop and maintain Continuous Integration (CI) and Continuous Deployment (CD) pipelines needed to automate testing and deployment of all production software components as part of a fast-paced, agile engineering team. Technologies required: ElastiCache, Bitbucket Pipelines, GitHub, Docker Compose, Kubernetes, Amazon Elastic Kubernetes Service (EKS), Amazon Elastic Container Service (ECS), and Linux-based server instances.
* Develop and maintain Infrastructure as Code (IaC) services for creation of ephemeral cloud-native infrastructure hosted on Amazon Web Services (AWS) and Google Cloud Platform (GCP). Technologies required: AWS CloudFormation, Google Cloud Deployment Manager, AWS SSM, YAML, JSON, Python.
* Manage, maintain, and monitor all cloud-based infrastructure hosted on AWS needed to ensure 99.99% uptime. Technologies required: AWS IAM, AWS CloudWatch, AWS EventBridge, AWS SSM, AWS SQS, AWS SNS, AWS Lambda and Step Functions, Python, Java, RDS Postgres, RDS MySQL, AWS S3, Docker, AWS Elasticsearch, Kibana, AWS Amplify.
* Manage, maintain, and monitor all cloud-based infrastructure hosted on AWS needed to ensure 100% cybersecurity compliance and surveillance. Technologies required: AWS SSM, YAML, JSON, Python, RDS Postgres, Tenable, CrowdStrike EPP, Sophos EPP, Wiz CSPM, Linux Bash scripts.
* Design and code technical solutions that improve the scalability, performance, and reliability of all data acquisition pipelines. Technologies required: Google Ads APIs, YouTube Data APIs, Python, Java, AWS Glue, AWS S3, AWS SNS, AWS SQS, AWS KMS, AWS RDS Postgres, AWS RDS MySQL, AWS Redshift.
* Monitor and remediate server and application security events as reported by CrowdStrike EPP, Tenable, Wiz CSPM, and Invicti.

Who you are:

* Minimum of 5 years of systems administration or DevOps engineering experience on AWS
* Track record of success in systems administration, including system design, configuration, maintenance, and upgrades
* Excels in architecting, designing, developing, and implementing cloud-native AWS platforms and services
* Knowledgeable in managing cloud infrastructure in a production environment to ensure high availability and reliability
* Proficient in automating system deployment, operation, and maintenance using infrastructure as code: Ansible, Terraform, CloudFormation, and other common DevOps tools and scripting
* Experience with Agile processes in a structured setting (Scrum and/or Kanban) required
* Security and compliance standards experience, such as PCI and SOC, as well as data privacy and protection standards, a big plus
* Experienced in implementing dashboards and data for decision-making related to team and system performance; relies heavily on telemetry and monitoring
* Exceptional analytical capabilities
* Strong communication skills and ability to effectively interact with engineering and business stakeholders

Preferred Qualifications:

* Bachelor's degree in technology, engineering, or a related field
* AWS certifications: Solutions Architect, DevOps Engineer, etc.

Why Spotter:

* Medical and vision insurance covered up to 100%
* Dental insurance
* 401(k) matching
* Stock options
* Complimentary gym access
* Autonomy and upward mobility
* Diverse, equitable, and inclusive culture, where your voice matters

In compliance with local law, we are disclosing the compensation, or a range thereof, for roles that will be performed in Culver City. Actual salaries will vary and may be above or below the range based on various factors, including but not limited to skill sets; experience and training; licensure and certifications; and other business and organizational needs. A reasonable estimate of the current pay range is $100K-$500K salary per year. The range listed is just one component of Spotter's total compensation package for employees. Other rewards may include an annual discretionary bonus and equity.

#Salary and compensation
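The 99.99% uptime goal above usually rests on building retries with exponential backoff into every pipeline stage, so transient failures (API throttling, timeouts) never surface as outages. A generic sketch of that pattern (not Spotter's actual code; names and defaults are illustrative):

```python
import time

def call_with_backoff(func, attempts=4, base_delay=0.1, retry_on=(Exception,)):
    """Invoke func(), retrying failed calls with exponentially growing delays.
    Re-raises the last exception once all attempts are exhausted."""
    for attempt in range(attempts):
        try:
            return func()
        except retry_on:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))  # 0.1s, 0.2s, 0.4s, ...
```

In production this wrapper would be narrowed to specific retryable exceptions and given jitter, but even this minimal form turns a flaky downstream dependency into a transparent hiccup rather than a pipeline failure.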
No salary data published by the company, so we estimated a salary based on similar jobs related to Docker, Testing, DevOps, Cloud, Senior, Engineer, and Linux:
$50,000 — $80,000/year
#Location
Los Angeles, California, United States
Coin Metrics is hiring a Remote INFRASTRUCTURE ENGINEER
\n\nCoin Metrics is a leading provider of cryptoasset data for institutions. We deliver transparent and actionable data and analytics to various industry stakeholders including asset managers, custodians, trading venues, research desks, and data/application providers. Coin Metricsโ data empowers its clients and the public to better understand, use and value open crypto networks.\n\nJoin a fast-paced startup pioneering novel metrics, data products, and intelligence solutions, which offer insights into the economics, markets, usage, health, and other aspects of public cryptocurrency blockchains like Bitcoin and Ethereum and other crypto networks.\n\nYou will be surrounded by talented people passionate about decentralized economies and the data behind them. Break new ground, create exciting new data-driven research and products, and help shape the future of finance.\nYOUR PURPOSE\n\nCoin Metrics is recruiting a Senior Infrastructure Engineer to support network nodes, market data collection, analytics factories, and data delivery operations. 
They will work with the infrastructure team to build out, maintain, and troubleshoot our rapidly expanding infrastructure.

YOUR VALUE

* Maintain blockchain nodes and monitor key areas such as network synchronization, connected peers, node configurations, and supporting infrastructure.
* Maintain persistent data stores by administering Postgres databases and Kafka streaming servers, especially management of disk space, resiliency, and backups.
* Investigate alerts, reported issues, or engineer requests and respond promptly.
* Plan and create Linux infrastructure on bare metal using Terraform, Ansible, and vendor management tools.
* Plan and create Kubernetes clusters and containers using Ansible and Helm.
* Plan hybrid cloud architecture on AWS, GCP, and bare metal using Terraform and Ansible.
* Administer network capabilities including CDN, DNS, proxies, and web servers.
* Design and support CI/CD pipelines using Bash, Python, Docker, and other languages/tools.
* Monitor infrastructure using tools like Datadog, Splunk, Prometheus, and/or Grafana.

YOUR EXPERTISE AND EXPERIENCE

* 5+ years of direct experience in infrastructure, DevOps, or back-end engineering roles with Linux.
* Demonstrated ability to manage multiple projects, priorities, and interruptions while communicating status and documenting activities appropriately; correctly track details across multiple environments.
* Experience with infrastructure-as-code and containerization toolkits (Terraform, CloudFormation, Ansible, Helm, Docker, Kubernetes).
* Experience configuring, running, and maintaining full nodes of various networks, preferably in containers. Bonus points for bootstrapping node operations for enterprise clients and/or applications.
* Experience with block file systems and database storage administration (ext4, ZFS, Postgres, MySQL, Oracle). Bonus points for familiarity with file-based and object-based storage solutions.
* Relevant work, or interest, in infosec and DevSecOps, specifically network theory fields such as topology analysis. Bonus points for experience responding to audits and certifications.
* Experience with scaling and migrating systems in a rapidly evolving environment that allows for little or no downtime, including a good understanding of incident management processes.
* Bonus points if you've managed at least one cloud infrastructure provider (AWS, Azure, GCP).

LIFE AT COIN METRICS

Coin Metrics is a fun and fast-paced team with employees located across the globe. We are united by our OPEN (Open, Pioneering, Elucidating, and Neutral) core values. Our employees are empowered to do what's best for our products, customers, and team members. Other benefits of working at Coin Metrics include:

* Competitive salary, 401(k) retirement plan or pension depending on location, bonus and options plans
* Comprehensive medical, dental, vision (dependent on location)
* Remote or hybrid work options
* Paid time off

Coin Metrics is an employer committed to diversity in its workforce and is proud to be an Equal Opportunity Employer. All qualified applicants receive consideration for employment without regard to race, color, religion, creed, gender, national origin, age, disability, veteran status, sex, gender expression or identity, sexual orientation, or any other basis protected by applicable law. For US applicants, you may view Pay Transparency, Employee Rights, and Know Your Rights notices by clicking on their corresponding links. Additionally, Coin Metrics participates in the E-Verify program where applicable, as required by law.

Coin Metrics is also committed to providing reasonable accommodations to individuals with disabilities. If you need a reasonable accommodation because of a disability for any part of the employment process, please send an e-mail to [email protected] and let us know the nature of your request and your contact information.

#Salary and compensation
No salary data published by company, so we estimated the salary based on similar jobs related to Crypto, Bitcoin, Ethereum, InfoSec, Docker, DevOps, Cloud, Node, Senior, Engineer, and Linux:
$60,000–$110,000/year
#Benefits

* 401(k)
* Distributed team
* Async
* Vision insurance
* Dental insurance
* Medical insurance
* Unlimited vacation
* Paid time off
* 4 day workweek
* 401k matching
* Company retreats
* Coworking budget
* Learning budget
* Free gym membership
* Mental wellness budget
* Home office budget
* Pay in crypto
* Pseudonymous
* Profit sharing
* Equity compensation
* No whiteboard interview
* No monitoring system
* No politics at work
* We hire old (and young)
#Location
Remote - International
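The node-monitoring duty in this posting (tracking network synchronization in particular) can be illustrated with a short sketch. This is not Coin Metrics code: the `sync_status` helper is hypothetical, but the response shapes follow the standard Ethereum JSON-RPC `eth_syncing` method, which returns `false` when a node is fully synced and hex-encoded progress counters otherwise.

```python
def sync_status(rpc_response: dict) -> dict:
    """Interpret an Ethereum JSON-RPC `eth_syncing` response.

    A fully synced node reports `false`; a syncing node reports
    hex-encoded block counters we can turn into a lag metric.
    """
    result = rpc_response.get("result")
    if result is False:
        return {"synced": True, "blocks_behind": 0}
    current = int(result["currentBlock"], 16)
    highest = int(result["highestBlock"], 16)
    return {"synced": False, "blocks_behind": highest - current}
```

An alerting job could call this against each node's RPC endpoint and page when `blocks_behind` exceeds a threshold.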
Please reference that you found the job on Remote OK; this helps us get more companies to post here. Thanks!
When applying for jobs, you should NEVER have to pay to apply. You should also NEVER have to pay to buy equipment which they then pay you back for later. Also never pay for trainings you have to do. Those are scams! NEVER PAY FOR ANYTHING! Posts that link to pages with "how to work online" are also scams. Don't use them or pay for them. Also always verify you're actually talking to the company in the job post and not an imposter. A good idea is to check the domain name for the site/email and see if it's the actual company's main domain name. Scams in remote work are rampant, be careful! Read more to avoid scams. When clicking on the button to apply above, you will leave Remote OK and go to the job application page for that company outside this site. Remote OK accepts no liability or responsibility as a consequence of any reliance upon information on there (external sites) or here.
Chan Zuckerberg Biohub - San Francisco is hiring a Remote AI/ML HPC Principal Engineer
The Opportunity

The Chan Zuckerberg Biohub Network has an immediate opening for an AI/ML High Performance Computing (HPC) Principal Engineer. The CZ Biohub Network is composed of several new institutes that the Chan Zuckerberg Initiative created to do great science that cannot be done in conventional environments. The CZ Biohub Network brings together researchers from across disciplines to pursue audacious, important scientific challenges. The Network consists of four institutes throughout the country: San Francisco, Silicon Valley, Chicago, and New York City. Each institute closely collaborates with the major universities in its local area. Along with the world-class engineering team at the Chan Zuckerberg Initiative, the CZ Biohub supports several hundred of the brightest, boldest engineers, data scientists, and biomedical researchers in the country, with the mission of understanding the mysteries of the cell and how cells interact within systems.

The Biohub is expanding its global scientific leadership, particularly in the area of AI/ML, with the acquisition of the largest GPU cluster dedicated to AI for biology. The AI/ML HPC Principal Engineer will be tasked with helping to realize the full potential of this capability, in addition to providing advanced computing capabilities and consulting support to science and technical programs. This position will work closely with many different science teams simultaneously to translate experimental descriptions into software and hardware requirements, across all phases of the scientific lifecycle: data ingest, analysis, management and storage, computation, authentication, tool development, and many other computing needs expressed by scientific projects.

This position reports to the Director for Scientific Computing and will be hired at a level commensurate with the skills, knowledge, and abilities of the successful candidate.

What You'll Do

* Work with a wide community of scientific disciplinary experts to identify emerging and essential information technology needs and translate those needs into information technology requirements
* Build an on-prem HPC infrastructure supplemented with cloud computing to support the expanding IT needs of the Biohub
* Support the efficiency and effectiveness of capabilities for data ingest, data analysis, data management, data storage, computation, identity management, and many other IT needs expressed by scientific projects
* Plan, organize, track, and execute projects
* Foster cross-domain community and knowledge-sharing between science teams with similar IT challenges
* Research, evaluate, and implement new technologies across a wide range of scientific compute, storage, networking, and data analytics capabilities
* Promote and assist with the use of cloud compute services (primarily AWS and GCP), containerization tools, etc. for scientific clients and research groups
* Work on problems of diverse scope where analysis of data requires evaluation of identifiable factors
* Assist in cost and schedule estimation for the IT needs of scientists, as part of supporting architecture development and scientific program execution
* Support Machine Learning capability growth at the CZ Biohub
* Provide scientist support in deployment and maintenance of developed tools
* Plan and execute all of the above responsibilities independently with minimal intervention

What You'll Bring

Essential:

* Bachelor's Degree in Biology or Life Sciences is preferred. Degrees in Computer Science, Mathematics, Systems Engineering, or a related field, or equivalent training/experience, are also acceptable.
* A minimum of 8 years of experience designing and building web-based working projects using modern languages, tools, and frameworks
* Experience building on-prem HPC infrastructure and capacity planning
* Experience and expertise working on complex issues where analysis of situations or data requires an in-depth evaluation of variable factors
* Experience supporting scientific facilities, and prior knowledge of scientific user needs, program management, data management planning, or lab-bench IT needs
* Experience with HPC and cloud computing environments
* Ability to interact with a variety of technical and scientific personnel with varied academic backgrounds
* Strong written and verbal communication skills to present and disseminate scientific software developments at group meetings
* Demonstrated ability to reason clearly about load, latency, bandwidth, performance, reliability, and cost, and to make sound engineering decisions balancing them
* Demonstrated ability to quickly and creatively implement novel solutions and ideas

Technical experience includes:

* Proven ability to analyze, troubleshoot, and resolve complex problems that arise in HPC production compute, interconnect, and storage hardware, software systems, and storage subsystems
* Configuring and administering parallel, network-attached storage (Lustre, GPFS on ESS, NFS, Ceph) and storage subsystems (e.g. IBM, NetApp, DataDirect Networks, LSI, VAST, etc.)
* Installing, configuring, and maintaining job management tools (such as SLURM, Moab, TORQUE, PBS, etc.) and implementing fairshare, node sharing, backfill, etc. for compute and GPUs
* Red Hat Enterprise Linux, CentOS, or derivatives, and Linux services and technologies like dnsmasq, systemd, LDAP, PAM, sssd, OpenSSH, and cgroups
* Scripting languages (including Bash, Python, or Perl)
* OpenACC, nvhpc, and an understanding of CUDA driver compatibility issues
* Virtualization (ESXi or KVM/libvirt), containerization (Docker or Singularity), configuration management and automation (tools like xCAT, Puppet, kickstart), and orchestration (Kubernetes, docker-compose, CloudFormation, Terraform)
* High-performance networking technologies (Ethernet and InfiniBand) and hardware (Mellanox and Juniper)
* Configuring, installing, tuning, and maintaining scientific application software (Modules, Spack)
* Familiarity with source control tools (Git or SVN)
* Experience supporting the use of popular ML frameworks such as PyTorch and TensorFlow
* Familiarity with cybersecurity tools, methodologies, and best practices for protecting systems used for science
* Experience with movement, storage, backup, and archive of large-scale data

Nice to have:

* An advanced degree is strongly desired

The Chan Zuckerberg Biohub requires all employees, contractors, and interns, regardless of work location or type of role, to provide proof of full COVID-19 vaccination, including a booster vaccine dose, if eligible, by their start date. Those who are unable to get vaccinated or obtain a booster dose because of a disability, or who choose not to be vaccinated due to a sincerely held religious belief, practice, or observance, must have an approved exception prior to their start date.

Compensation

* $212,000 - $291,500

New hires are typically hired into the lower portion of the range, enabling employee growth in the range over time. To determine starting pay, we consider multiple job-related factors including a candidate's skills, education and experience, market demand, business needs, and internal parity. We may also adjust this range in the future based on market data. Your recruiter can share more about the specific pay range during the hiring process.

#Salary and compensation
No salary data published by company, so we estimated the salary based on similar jobs related to Consulting, Education, Cloud, Node, Engineer, and Linux:
$57,500–$85,000/year
#Location
San Francisco, California, United States
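Day-to-day SLURM administration of the kind this posting lists often starts with tallying node states. As a minimal sketch (the `node_states` helper is hypothetical, but the `sinfo -N -h -o '%N %t'` output format it parses — one "node state" pair per line, with compact state codes like `idle`, `alloc`, and `down*` — is real SLURM behavior):

```python
from collections import Counter

def node_states(sinfo_output: str) -> Counter:
    """Tally node states from `sinfo -N -h -o '%N %t'` output.

    Each line is '<node> <state>'. SLURM appends flag characters
    such as '*' (non-responding) to the state, which we strip so
    that 'down' and 'down*' count as the same state.
    """
    states = Counter()
    for line in sinfo_output.strip().splitlines():
        parts = line.split()
        if len(parts) == 2:
            states[parts[1].rstrip("*~#$%!@")] += 1
    return states
```

A monitoring agent could run this on the output of `subprocess.run(["sinfo", "-N", "-h", "-o", "%N %t"], ...)` and export the counts as gauges.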
Chan Zuckerberg Biohub - San Francisco is hiring a Remote HPC Principal Engineer
The Opportunity

The Chan Zuckerberg Biohub has an immediate opening for a High Performance Computing (HPC) Principal Engineer. The CZ Biohub is a one-of-a-kind independent non-profit research institute that brings together three leading universities (Stanford, UC Berkeley, and UC San Francisco) into a single collaborative technology and discovery engine. Along with the world-class engineering team at the Chan Zuckerberg Initiative, the CZ Biohub supports over 100 of the brightest, boldest engineers, data scientists, and biomedical researchers in the Bay Area, with the mission of understanding the underlying mechanisms of disease through the development of tools and technologies and their application to therapeutics and diagnostics.

This position will be tasked with strengthening and expanding the scientific computational capacity to further the Biohub's expanding global scientific leadership. The HPC Principal Engineer will also provide IT capabilities and consulting support to science and technical programs. This position will work closely with many different science teams simultaneously to translate experimental descriptions into software and hardware requirements, across all phases of the scientific lifecycle: data ingest, analysis, management and storage, computation, authentication, tool development, and many other IT needs expressed by scientific projects.

This position reports to the Director for Scientific Computing and will be hired at a level commensurate with the skills, knowledge, and abilities of the successful candidate.

What You'll Do

* Work with a wide community of scientific disciplinary experts to identify emerging and essential information technology needs and translate those needs into information technology requirements
* Build an on-prem HPC infrastructure supplemented with cloud computing to support the expanding IT needs of the Biohub
* Support the efficiency and effectiveness of capabilities for data ingest, data analysis, data management, data storage, computation, identity management, and many other IT needs expressed by scientific projects
* Plan, organize, track, and execute projects
* Foster cross-domain community and knowledge-sharing between science teams with similar IT challenges
* Research, evaluate, and implement new technologies across a wide range of scientific compute, storage, networking, and data analytics capabilities
* Promote and assist with the use of cloud compute services (primarily AWS and GCP), containerization tools, etc. for scientific clients and research groups
* Work on problems of diverse scope where analysis of data requires evaluation of identifiable factors
* Assist in cost and schedule estimation for the IT needs of scientists, as part of supporting architecture development and scientific program execution
* Support Machine Learning capability growth at the CZ Biohub
* Provide scientist support in deployment and maintenance of developed tools
* Plan and execute all of the above responsibilities independently with minimal intervention

What You'll Bring

Essential:

* Bachelor's Degree in Biology or Life Sciences is preferred. Degrees in Computer Science, Mathematics, Systems Engineering, or a related field, or equivalent training/experience, are also acceptable. An advanced degree is strongly desired.
* A minimum of 8 years of experience designing and building web-based working projects using modern languages, tools, and frameworks
* Experience building on-prem HPC infrastructure and capacity planning
* Experience and expertise working on complex issues where analysis of situations or data requires an in-depth evaluation of variable factors
* Experience supporting scientific facilities, and prior knowledge of scientific user needs, program management, data management planning, or lab-bench IT needs
* Experience with HPC and cloud computing environments
* Ability to interact with a variety of technical and scientific personnel with varied academic backgrounds
* Strong written and verbal communication skills to present and disseminate scientific software developments at group meetings
* Demonstrated ability to reason clearly about load, latency, bandwidth, performance, reliability, and cost, and to make sound engineering decisions balancing them
* Demonstrated ability to quickly and creatively implement novel solutions and ideas

Technical experience includes:

* Proven ability to analyze, troubleshoot, and resolve complex problems that arise in HPC production storage hardware, software systems, storage networks, and systems
* Configuring and administering parallel, network-attached storage (Lustre, NFS, ESS, Ceph) and storage subsystems (e.g. IBM, NetApp, DataDirect Networks, LSI, etc.)
* Installing, configuring, and maintaining job management tools (such as SLURM, Moab, TORQUE, PBS, etc.)
* Red Hat Enterprise Linux, CentOS, or derivatives, and Linux services and technologies like dnsmasq, systemd, LDAP, PAM, sssd, OpenSSH, and cgroups
* Scripting languages (including Bash, Python, or Perl)
* Virtualization (ESXi or KVM/libvirt), containerization (Docker or Singularity), configuration management and automation (tools like xCAT, Puppet, kickstart), and orchestration (Kubernetes, docker-compose, CloudFormation, Terraform)
* High-performance networking technologies (Ethernet and InfiniBand) and hardware (Mellanox and Juniper)
* Configuring, installing, tuning, and maintaining scientific application software
* Familiarity with source control tools (Git or SVN)

The Chan Zuckerberg Biohub requires all employees, contractors, and interns, regardless of work location or type of role, to provide proof of full COVID-19 vaccination, including a booster vaccine dose, if eligible, by their start date. Those who are unable to get vaccinated or obtain a booster dose because of a disability, or who choose not to be vaccinated due to a sincerely held religious belief, practice, or observance, must have an approved exception prior to their start date.

Compensation

* Principal Engineer: $212,000 - $291,500

New hires are typically hired into the lower portion of the range, enabling employee growth in the range over time. To determine starting pay, we consider multiple job-related factors including a candidate's skills, education and experience, market demand, business needs, and internal parity. We may also adjust this range in the future based on market data. Your recruiter can share more about the specific pay range during the hiring process.

#Salary and compensation
No salary data published by company, so we estimated the salary based on similar jobs related to Consulting, Education, Cloud, Engineer, and Linux:
$50,000–$85,000/year
#Location
San Francisco, California, United States
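The "installing, configuring, and maintaining job management tools" requirement in this posting can be made concrete with a small sketch that renders a SLURM batch script. The `sbatch_script` helper is illustrative, not from the posting; the `#SBATCH` directives it emits (`--job-name`, `--partition`, `--gres`) are real `sbatch` options.

```python
def sbatch_script(job_name: str, partition: str, gpus: int, command: str) -> str:
    """Render a minimal SLURM batch script as a string.

    The result can be written to a file and submitted with `sbatch`,
    or piped to `sbatch` on stdin.
    """
    lines = [
        "#!/bin/bash",
        f"#SBATCH --job-name={job_name}",
        f"#SBATCH --partition={partition}",
        f"#SBATCH --gres=gpu:{gpus}",  # request GPUs as a generic resource
        command,
    ]
    return "\n".join(lines)
```

In practice an HPC team would wrap templates like this so that researchers request resources consistently instead of hand-writing directives.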
This job post is closed and the position is probably filled. Please do not apply. Work for Nethermind and want to re-open this job? Use the edit link in the email you received when you posted the job.
Closed by robot after the apply link errored with code 403, 11 months ago.
What are we all about?

We are a team of world-class builders and researchers with expertise across several domains: Ethereum Protocol Engineering, Layer-2, Decentralized Finance (DeFi), Miner Extractable Value (MEV), Smart Contract Development, Security Auditing, and Formal Verification.

Working to solve some of the most challenging problems in the blockchain space, we frequently collaborate with renowned companies such as the Ethereum Foundation, StarkWare, Gnosis Chain, Aave, Flashbots, xDai, Open Zeppelin, Forta Protocol, Energy Web, POA Network, and many more.

We actively contribute to Ethereum core development, EIPs, and network upgrades together with the Ethereum Foundation and other client teams.

Today, there are nearly 200 of us working remotely from over 45 countries.

You can view all our open positions here: https://jobs.nethermind.io/

Are you the one?

The start-up is a non-custodial, institutional-grade, staking-as-a-service provider. The business operates validators across a wide range of Proof-of-Stake protocols and allows institutional clients to delegate their assets using their preferred custodians in order to collect staking rewards. It charges a commission on the staking rewards for operating the validators and providing analytics and reporting. The business' differentiation is based on:

* Institutional-grade security and compliance (e.g. permissioned validators)
* Superior transparency and analytics, including dashboard and reporting
* Processes designed with institutional workflows in mind

The role

As an SRE/Systems Engineer you will be responsible for running hybrid infrastructure.
You will apply best practices for monitoring, observability, security, and infrastructure automation to improving and expanding the multi-cloud hybrid platform used by our growing set of deployments.

Responsibilities:

* Be responsible for the various processes on infrastructure environments across different operating systems and platforms (AWS, Azure, GCP, private clouds, and on-prem).
* Evaluate business needs and produce designs to achieve the assigned projects.
* Provide systems expertise and drive operational best practices; be responsible for setting up and maintaining performance and system monitoring.
* Work with colleagues throughout the organization to build a best-in-class hybrid platform.
* Automation: build automation processes and help drive the adoption of automated deployment practices throughout all infrastructure components.
* Provide a support function as required.
* Participate in the on-call rota.

In this role, we need you to have experience in:

* Infrastructure as code (IaC) running on different platforms (AWS, Azure, GCP, private clouds, and on-prem).
* Monitoring and maintaining Windows and Linux systems.
* Server installation, configuration, and maintenance.
* Networking, and next-generation firewall installation, maintenance, and configuration.
* Design and implementation with high availability.
* Performing proactive analysis of infrastructure capacity and performance.
* Performing system backup and recovery.
* Ensuring security systems/appliances are functional and improved upon for proactive cyber defence.
* Developing process automation.
* Acting as a role model for technical competence, helpfulness, facilitation of learning, and teamwork.

Nice-to-have skills:

* Expertise in data centre management is preferred.
* Experience with security technologies such as Fortinet, Teleport, SSL certificates, and PKI management.
* Hands-on experience with networks, network administration, and network installation.
* Hands-on knowledge of container/container-orchestration technology with Docker and/or Kubernetes.
* A background with CI/CD tools like Octopus, Bamboo, Jenkins, GitLab, TeamCity.
* Scripting proficiency in Bash, PowerShell, Python, Perl, or others.

Perks and benefits:

* Fully remote
* Flexible working hours
* Plus equity

Join us!

We are always on the lookout for talent! If what we do excites you, but none of the current open positions match your background, we encourage you to send us your CV at [email protected]

Join our growing and active community of 2000+ developers on our Discord server: https://discord.com/invite/PaCMRFdvWT

In the meantime, keep up to date on what we are working on by following us on our social channels:

https://twitter.com/nethermindeth
https://www.linkedin.com/company/nethermind/

Click here to view our Privacy Policy.

#Salary and compensation
No salary data published by company, so we estimated the salary based on similar jobs related to Ethereum, Docker, Finance, Engineer, and Linux:
$75,000–$120,000/year
#Location
Worldwide
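The "proactive analysis of infrastructure capacity" item in this posting often boils down to trend extrapolation: fit a line through recent usage samples and estimate when a volume fills. A minimal sketch (the `days_until_full` helper is hypothetical, using an ordinary least-squares fit):

```python
def days_until_full(samples: list[tuple[float, float]], capacity_gb: float):
    """Estimate days until a volume fills, from (day, used_gb) samples.

    Fits a least-squares line through the samples and extrapolates
    from the latest sample to the capacity. Returns None when usage
    is flat or shrinking (no meaningful fill date).
    """
    n = len(samples)
    mean_x = sum(d for d, _ in samples) / n
    mean_y = sum(u for _, u in samples) / n
    var = sum((d - mean_x) ** 2 for d, _ in samples)
    cov = sum((d - mean_x) * (u - mean_y) for d, u in samples)
    slope = cov / var  # growth rate in GB per day
    if slope <= 0:
        return None
    last_day, last_used = samples[-1]
    return (capacity_gb - last_used) / slope
```

Feeding this daily `df` readings and alerting when the estimate drops below, say, 14 days turns reactive disk-full incidents into planned expansions.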
# How do you apply?

This job post has been closed by the poster, which means they probably have enough applicants now. Please do not apply.