Phaidra is hiring a Remote Staff Software Engineer
About Phaidra

Phaidra is building the future of industrial automation.

The world today is filled with static, monolithic infrastructure. Factories, power plants, buildings, etc. operate the same way they have operated for decades - because the controls programming is hard-coded: thousands of lines of rules and heuristics that define how the machines interact with each other. The result of all this hard-coding is that facilities are frozen in time, unable to adapt to their environment, while their performance slowly degrades.

Phaidra creates AI-powered control systems for the industrial sector, enabling industrial facilities to automatically learn and improve over time. Specifically:

* We use reinforcement learning algorithms to provide this intelligence, converting raw sensor data into high-value actions and decisions.
* We focus on industrial applications, which tend to be well-sensorized with measurable KPIs - perfect for reinforcement learning.
* We enable domain experts (our users) to configure the AI control systems (i.e. agents) without writing code. They define what they want their AI agents to do, and we do it for them.

Our team has a track record of applying AI to some of the toughest problems. From achieving superhuman performance with DeepMind's AlphaGo, to reducing the energy required to cool Google's Data Centers by 40%, we deeply understand AI and how to apply it in production for massive impact.

Phaidra is based in the USA but 100% remote; we do not have a physical office. We hire employees internationally with the help of our partner, OysterHR. Our team is currently located throughout the USA, Canada, UK, Norway, Italy, Spain, Portugal, and India.

**Please only apply to one opening. If you are a better fit for another opening, our team will move your application. Candidates who apply to multiple openings will not be considered.**

Who You Are

We are looking for a very experienced Software Engineer with a focus on MLOps tech leadership to be a part of our growing AI Platform team. You are bold and creative, and have deep empathy for customers. You will design and implement significant parts of the code base and will have the opportunity to make an immediate impact with your work and to guide the product and team as we grow.

You are curious and like to understand technologies and their tradeoffs in depth, providing technical guidance to the team and peers as and when required. Leading by example, you have accumulated a wealth of insights and experiences from your hands-on involvement in the field, and you are committed to rolling up your sleeves and getting work done. You like joining and supporting other engineers in their work, both to learn from them and to let them benefit from your expertise and experience.

You have the motivation and skills to identify technical product needs, initiate projects, and own their delivery, involving engineering peers as needed. You are comfortable with respectfully challenging the status quo to drive and deliver technical excellence in the team.

* We are seeking a team member located within one of the following areas: USA/Canada/UK/EU

Responsibilities

The AI Platform team you are joining is responsible for building the core platform that powers model training, inference, and decision making in our products. Furthermore, the team owns MLOps and the services hosting our AI capabilities. Productionizing results from Research, extending our systems, and providing support according to our customers' needs fall within the team's responsibilities as well.
You will join this team as a very experienced engineer with a focus on MLOps solutions to grow our expertise in that area, but you will also contribute as a software engineer more widely in the team.

As an organization, we strongly believe in expertise across the stack. As such, you will experience flavors of Machine Learning, Software Engineering, Distributed Systems, MLOps and DevOps.

In particular, you will:

* Design, build and lead the MLOps initiatives and vision for the AI Platform to strengthen automation, orchestration, versioning, observability, monitoring and collaboration for the platform.
* Build and design scalable components for the AI Platform to allow high-throughput training and inference for RL agents doing real-time inference for autonomous control of industrial systems.
* Contribute to the design and implementation of the product backend by writing REST & gRPC API services and scalable event-driven backend applications.
* Design clear, extensible software interfaces for the team's customers and maintain a high release quality bar.
* Perform DevOps duties of CI/CD, Release & Deployment management.
* Be a part of our global production on-call team, and own & operate your services in production, meeting Phaidra's high bar for operational excellence.
* Lead cross-functional initiatives, collaborating with engineers, product managers and TPMs across teams.
* Mentor your peers and be a technical role model in the team.

Onboarding

In your first 30 days…

* You will be immersed in an onboarding program that introduces you to Phaidra and our product.
* You will spend time in the Engineering org, learning how the teams operate, interact, and approach problems.
* You will read various parts of our handbook and familiarize yourself with the documentation culture at Phaidra.
* You will set up your development environment and start working on an onboarding exercise that will introduce you to various parts of our code base.
* You will learn about how we use agile and be able to navigate our sprint boards and backlogs.
* You will learn about various team standards and development & release processes.
* You will start to learn about our system architecture and infrastructure.
* You will start picking up a few good "first tasks" to get yourself accustomed to the end-to-end release flow.

In your first 60 days…

* You will get a solid understanding of what Phaidra does and how we do it.
* You will meet with team members across Phaidra and start building relationships that will help you be successful at your job.
* You will complete the onboarding exercise and will be on your way to completing your first production task.
* You will take ownership of the MLOps work on the team, identify gaps and propose roadmap items on the topic.

In your first 90 days…

* You will be fully integrated in the team and with team members across the company.
* You will have a more in-depth understanding of our system architecture and infrastructure.
* You will complete your first on-call experience, helping monitor and improve our production environments.
* You will become an expert with our tooling.
* You will start to contribute to knowledge sharing throughout Phaidra and the team.
* You will proactively drive MLOps topics in the team and represent them technically throughout the company.

Key Qualifications

* 10+ years of work experience.
* Proven record of impact as a tech leader and bar-raiser for ambitious Software Engineering teams.
* Strong experience designing and implementing MLOps solutions for AI production systems.
* Extensive platform Software Engineering experience, with the ability to contribute at all levels as an individual contributor and tech leader.
* Strong expertise in building, operating and monitoring large-scale multi-tenant systems with high availability, fault tolerance, performance tuning, monitoring, and metrics collection.
* Ability to take ownership of real-time production systems: aligning technical requirements with business requirements, raising the bar for operational excellence, and on-call incident handling.
* Strong expertise in Python and Cloud environments.
* Very good grasp of Machine Learning (especially Deep Learning) fundamentals.
* Ability to collaborate and communicate effectively in an all-remote setting.
* Doing your work with curiosity, ownership, transparency & directness, outcome orientation, and customer empathy.

Bonus

* Experience building applications that can be deployed in cloud as well as in hybrid or on-prem environments.
* Exposure to Reinforcement Learning, or other in-depth knowledge of modern ML applications.
* Experience with industrial applications, industrial control systems, IoT, sensor time-series applications, or similar.

Relevant Technologies from our Stack

* Python, Go
* PyTorch, PyTorch Lightning
* Ray.io, Prefect, MLflow
* REST & gRPC micro-services
* Docker, Kubernetes, Terraform & Kapitan
* GCP - GKE, PubSub, CloudSQL, BigTable, Postgres, etc.
* Grafana Cloud, Prometheus
* Poetry, Pants
* Gitlab CI, ArgoCD, Atlantis

General Interview Process

All of our interviews are held via Google Meet, and an active camera connection is required.

* Meeting with Operations (30 minutes): The purpose of this interview is to meet you, learn more about your background, discuss what you are looking for in a new position, and cover formalities around your application.
* Tech Lead interview (60 minutes): This interview is a combination of technical and cultural fit assessment. We will cover your technical experience and your skills as an engineer and a tech lead while discussing projects that you have worked on in the past. You will meet the manager for the role as well as our VP of Engineering, with the opportunity to ask any questions about the team, role and engineering at Phaidra.
* ML system design & SRE (90 minutes): In this interview, we will go over a real-world MLOps problem. You can expect to draw architecture diagrams using boxes & arrows in your browser. We will talk about system design, scalability and monitoring.
* ML interview (60 minutes): This interview will focus on Machine Learning approaches, algorithms and theory. You will be asked about ML algorithms you are familiar with, how they work under the hood and how to use them in an applied setting.
* Culture fit interview with Phaidra's co-founders (30 minutes): This interview focuses on alignment with Phaidra's values and the mutual cultural fit.

Base Salary

* US Residents: $156,000-$234,000/year
* UK Residents: £108,000-£162,000/year

Salary ranges for EU countries will vary based on the market rate for the location.

This position will also include equity.

These are good-faith estimates of the base salary range for this position. Multiple factors such as experience, education, level, and location are taken into account when determining compensation.

Benefits & Perks

* Fast-paced and team-oriented environment where you will be instrumental in the direction of the company.
* Phaidra is a 100% remote company with a digital nomad policy.
* Competitive compensation & equity.
* Outsized responsibilities & professional development.
* Training is foundational: functional, customer immersion, and development training.
* Medical, dental, and vision insurance (exact benefits vary by region).
* Unlimited paid time off, with a required minimum of 20 days off per year.
* Paid parental leave (exact benefits vary by region).
* Home office setup allowance and company MacBook.
* Monthly remote work stipend.

On being Remote

We are thoughtful about remote collaboration. We look to the pioneers - like GitLab - for inspiration and best practices to create a stellar remote work environment. We have a documentation-first culture and actively practice asynchronous communication in everything we do. Our team stays connected through tools like Slack and video chat. Most teams meet daily, and we have dedicated all-hands meetings bi-weekly to build strong relationships. We hold virtual team-building events once per month - and even hold virtual socials to watch rocket launches! We have a yearly in-person, all-company summit in locations like Seattle, Athens, Goa, and Barcelona.

Equal Opportunity Employment

Phaidra is an Equal Opportunity Employer; employment with Phaidra is governed on the basis of merit, competence, and qualifications and will not be influenced in any manner by race, color, religion, gender, national origin/ethnicity, veteran status, disability status, age, sexual orientation, gender identity, marital status, mental or physical disability, or any other legally protected status. We welcome diversity and strive to maintain an inclusive environment for all employees. If you need assistance with completing the application process, please contact us at [email protected].

E-Verify Notice

Phaidra participates in E-Verify, an employment authorization database provided through the U.S. Department of Homeland Security (DHS) and Social Security Administration (SSA). As required by law, we will provide the SSA and, if necessary, the DHS, with information from each new employee's Form I-9 to confirm work authorization for those residing in the United States.

Additional information about E-Verify can be found here.

#LI-Remote

WE DO NOT ACCEPT APPLICATIONS FROM RECRUITERS.

#Salary and compensation
No salary data published by the company, so we estimated the salary based on similar jobs related to Design, Python, DevOps, Cloud, API, Engineer and Backend:
$70,000 — $105,000/year
#Benefits
* 401(k)
* Distributed team
* Async
* Vision insurance
* Dental insurance
* Medical insurance
* Unlimited vacation
* Paid time off
* 4 day workweek
* 401k matching
* Company retreats
* Coworking budget
* Learning budget
* Free gym membership
* Mental wellness budget
* Home office budget
* Pay in crypto
* Pseudonymous
* Profit sharing
* Equity compensation
* No whiteboard interview
* No monitoring system
* No politics at work
* We hire old (and young)
#Location
Seattle, Washington, United States
Please mention that you found the job on Remote OK - this helps us get more companies to post here, thanks!
When applying for jobs, you should NEVER have to pay to apply. You should also NEVER have to pay to buy equipment which they then pay you back for later. Also never pay for trainings you have to do. Those are scams! NEVER PAY FOR ANYTHING! Posts that link to pages with "how to work online" are also scams. Don't use them or pay for them. Also always verify you're actually talking to the company in the job post and not an imposter. A good idea is to check the domain name for the site/email and see if it's the actual company's main domain name. Scams in remote work are rampant, be careful! Read more to avoid scams. When clicking on the button to apply above, you will leave Remote OK and go to the job application page for that company outside this site. Remote OK accepts no liability or responsibility as a consequence of any reliance upon information on there (external sites) or here.
Phaidra is hiring a Remote Senior Software Engineer
Who You Are

We are looking for a driven Software Engineer (MLOps) to be a part of our growing AI Platform team. You are bold and creative, and have deep empathy for customers who may not be tech-savvy. You will design and implement significant parts of the code base and will have the opportunity to make an immediate impact with your work and to guide the product and team as we grow.

You are curious and like to understand technologies and their tradeoffs in depth, providing technical guidance to the team and peers as and when required. Leading by example, you have accumulated a wealth of insights and experiences from your hands-on involvement in the field, and you are committed to rolling up your sleeves and getting work done. You like joining and supporting other engineers in their work, both to learn from them and to let them benefit from your expertise and experience.

You have the motivation and skills to identify technical product needs, initiate projects, and own their delivery, involving engineering peers as needed. You are comfortable with respectfully challenging the status quo to drive and deliver technical excellence in the team.

**We are seeking a team member located within one of the following areas: USA/Canada/UK**

Responsibilities

The AI Platform team you are joining is responsible for building the core platform that powers model training, inference, and decision making in our products. Furthermore, the team owns MLOps and the services hosting our AI capabilities. Productionizing results from Research, extending our systems, and providing support according to our customers' needs fall within the team's responsibilities as well. You will join this team as an experienced engineer with a focus on MLOps solutions to grow our expertise in that area, but you will also contribute as a software engineer more widely in the team.

As an organization, we strongly believe in expertise across the stack. As such, you will experience flavors of Machine Learning, Software Engineering, Distributed Systems, MLOps and DevOps.

In particular, you will:

* Design, build and lead the MLOps initiatives and vision for the AI Platform to strengthen automation, orchestration, versioning, observability, monitoring and collaboration for the platform.
* Build and design scalable components for the AI Platform to allow high-throughput training and inference for RL agents doing real-time inference for autonomous control of industrial systems.
* Contribute to the design and implementation of the product backend by writing REST & gRPC API services and scalable event-driven backend applications.
* Design clear, extensible software interfaces for the team's customers and maintain a high release quality bar.
* Design and optimize data storage & retrieval mechanisms for high throughput, security & ease of access.
* Perform DevOps duties of CI/CD, Release & Deployment management.
* Be a part of our global production on-call team, and own & operate your services in production, meeting Phaidra's high bar for operational excellence.
* Lead cross-functional initiatives, collaborating with engineers, product managers and TPMs across teams.
* Mentor your peers and be a technical role model in the team.

Onboarding

In your first 30 days…

* You will be immersed in an onboarding program that introduces you to Phaidra and our product.
* You will spend time in the Engineering org, learning how the teams operate, interact, and approach problems.
* You will read various parts of our handbook and familiarize yourself with the documentation culture at Phaidra.
* You will set up your development environment and start working on an onboarding exercise that will introduce you to various parts of our code base.
* You will learn about how we use agile and be able to navigate our sprint boards and backlogs.
* You will learn about various team standards and development & release processes.
* You will start to learn about our system architecture and infrastructure.
* You will start picking up a few good "first tasks" to get yourself accustomed to the end-to-end release flow.

In your first 60 days…

* You will get a solid understanding of what Phaidra does and how we do it.
* You will meet with team members across Phaidra and start building relationships that will help you be successful at your job.
* You will complete the onboarding exercise and will be on your way to completing your first production task.
* You will take ownership of the MLOps work on the team, identify gaps and propose roadmap items on the topic.

In your first 90 days…

* You will be fully integrated in the team and with team members across the company.
* You will have a more in-depth understanding of our system architecture and infrastructure.
* You will complete your first on-call experience, helping monitor and improve our production environments.
* You will become an expert with our tooling.
* You will start to contribute to knowledge sharing throughout Phaidra and the team.
* You will proactively drive MLOps topics in the team and represent them technically throughout the company.

Key Qualifications

* 7+ years of work experience.
* Bachelor's or Master's in Computer Science, or equivalent experience.
* Strong experience designing and implementing MLOps solutions for AI production systems.
* Expertise with production Software Engineering - relational and non-relational data modelling, micro-services, understanding of event-driven systems, etc.
* Strong experience building large-scale multi-tenant systems with high availability, fault tolerance, performance tuning, monitoring, and statistics/metrics collection.
* Strong expertise in Python and Cloud environments.
* Good grasp of Machine Learning (especially Deep Learning) fundamentals.
* Ability to collaborate and communicate effectively in an all-remote setting.
* Doing your work with curiosity, ownership, transparency & directness, outcome orientation, and customer empathy.

Bonus

* Experience as a service owner of a real-time production system - operating & monitoring services in production, including using observability tooling such as Prometheus, Grafana, Tempo or equivalent offerings, and incident management.
* Experience building applications that can be deployed in cloud, hybrid or on-prem environments.
* Exposure to Reinforcement Learning.

Our Stack

* Languages - (Backend) Python, Go; (Frontend) JavaScript/TypeScript, React; Customer SDK & Clients - C# .NET
* PyTorch
* Cypress
* Docker, Kubernetes, Terraform & Kapitan
* Gitlab CI, ArgoCD, Atlantis, Vercel
* GCP - GKE, PubSub, CloudSQL, BigTable, Postgres, etc.
* Ray.io
* REST & gRPC micro-services
* Poetry, Pantsbuild

General Interview Process

All of our interviews are held via Google Meet, and an active camera connection is required.

* Initial screening interview with a People Operations team member (30 minutes): The purpose of this interview is to meet you, learn more about your background, and discuss what you are looking for in a new position.
* Hiring manager interview (30 minutes): The purpose of this meeting is for you to get to know the manager for the role. This chat will mainly focus on your previous experience and technical background. You can expect to talk about projects that you have worked on in the past and ask any questions about the team & role.
* Technical Interview 1 (60 minutes): The purpose of this interview is to assess your skills in Machine Learning and related mathematics.
* Technical Interview 2 (90 minutes): In this interview, we will go over a real-world MLOps problem. You can expect to draw architecture diagrams using boxes & arrows in your browser. We will talk about system design, scalability and monitoring.
* Meeting with VP of Engineering (30 minutes): This interview is a combination of technical and cultural fit assessment. You will cover the technical experience and skills that you bring, and have an opportunity to ask any questions about the team's culture or vision.
* Culture fit interview with Phaidra's co-founders (30 minutes): This interview focuses on alignment with Phaidra's values.

Base Salary

* US Residents: $115,200-$208,800/year
* UK Residents: £96,400-£144,000/year

This position will also include equity.

These are good-faith estimates of the base salary range for this position. Multiple factors such as experience, education, level, and location are taken into account when determining compensation.

#Salary and compensation
No salary data published by the company, so we estimated the salary based on similar jobs related to Design, Python, DevOps, Cloud, API, Senior, Engineer and Backend:
$65,000 — $110,000/year
#Location
Seattle, Washington, United States
We're looking for a savvy and experienced Senior Data Engineer to join the Data Platform Engineering team at Hims. As a Senior Data Engineer, you will work with the analytics engineers, product managers, engineers, security, DevOps, analytics, and machine learning teams to build a data platform that backs the self-service analytics, machine learning models, and data products serving over a million Hims & Hers users.

You Will:

* Architect and develop data pipelines to optimize performance, quality, and scalability
* Build, maintain & operate scalable, performant, and containerized infrastructure required for optimal extraction, transformation, and loading of data from various data sources
* Design, develop, and own robust, scalable data processing and data integration pipelines using Python, dbt, Kafka, Airflow, PySpark, SparkSQL, and REST API endpoints to ingest data from various external data sources into the Data Lake
* Develop testing frameworks and monitoring to improve data quality, observability, pipeline reliability, and performance
* Orchestrate sophisticated data flow patterns across a variety of disparate tooling
* Support analytics engineers, data analysts, and business partners in building tools and data marts that enable self-service analytics
* Partner with the rest of the Data Platform team to set best practices and ensure they are executed
* Partner with the analytics engineers to ensure the performance and reliability of our data sources
* Partner with machine learning engineers to deploy predictive models
* Partner with the legal and security teams to build frameworks and implement data compliance and security policies
* Partner with DevOps to build IaC and CI/CD pipelines
* Support code versioning and code deployments for data pipelines

You Have:

* 8+ years of professional experience designing, creating and maintaining scalable data pipelines using Python, API calls, SQL, and scripting languages
* Demonstrated experience writing clean, efficient & well-documented Python code, and willingness to become effective in other languages as needed
* Demonstrated experience writing complex, highly optimized SQL queries across large data sets
* Experience with cloud technologies such as AWS and/or Google Cloud Platform
* Experience with the Databricks platform
* Experience with IaC technologies like Terraform
* Experience with data warehouses like BigQuery, Databricks, Snowflake, and Postgres
* Experience building event streaming pipelines using Kafka/Confluent Kafka
* Experience with the modern data stack: Airflow/Astronomer, Databricks, dbt, Fivetran, Confluent, Tableau/Looker
* Experience with containers and container orchestration tools such as Docker or Kubernetes
* Experience with Machine Learning & MLOps
* Experience with CI/CD (Jenkins, GitHub Actions, CircleCI)
* Thorough understanding of SDLC and Agile frameworks
* Project management skills and a demonstrated ability to work autonomously

Nice to Have:

* Experience building data models using dbt
* Experience with JavaScript and event tracking tools like GTM
* Experience designing and developing systems with desired SLAs and data quality metrics
* Experience with microservice architecture
* Experience architecting an enterprise-grade data platform

#Salary and compensation
No salary data published by the company, so we estimated the salary based on similar jobs related to Python, Docker, Testing, DevOps, JavaScript, Cloud, API, Senior, Legal and Engineer:
$60,000 — $110,000/year
#Location
San Francisco, California, United States
Orbital Insight is hiring a Remote Sr DevOps Engineer, Public Sector
\nOur mission at Orbital Insight is to understand what weโre doing on and to the Earth. We do this through our cloud-based SaaS platform, Orbital Insight Terrascope and on-premise offering, Terrabox, by ingesting geospatial data at massive scales, then applying state-of-art AI/ML and Data Science algorithms at scale. \n\n\nIn order to achieve this, the cloud infrastructure engineering team is motivated to provide a robust infrastructure layer between the platform and the cloud providers. This software layer is centered around Kubernetes to provide cloud agnostic feasibility and requires deployability on customersโ cloud environments.\n\n\nOur work spans multiple areas including cloud agnostic infrastructure, networking, orchestration, distributed storage, and online/streaming processing. If you enjoy applying computer science fundamentals to real-world challenges and building scalable systems, you will fit right in.\n\n\nIn order to take our government business to the next level we are looking to hire a DevOps Engineer to join our Public Sector team. This DevOps Engineer will work on developing, adapting, and extending our commercial software products to fit the needs of our government customers.\nAt Orbital Insight, we work in cross-functional Agile teams, exploring how geospatial data, data science, AI/deep learning, computer vision, and intimacy with user needs can create entirely new products that give novel insights about what we are doing on and to the earth. Our pioneering products help people answer questions that cannot be answered today.\n\n\nWe value experienced engineers who already have a breadth of experience in multiple areas -- databases, devops, machine learning, API design, and more -- and are eager to learn new areas and new technologies.\nIf all this sounds interesting to you, weโd love to meet you.\n\n\nThe position is remote with as needed to travel to customer sites in the Washington DC area. 
Occasional travel to corporate headquarters in Palo Alto, California.\n\n\nThis position requires an active TS/SCI (DoD) clearance.\n\n\n\nResponsibilities\n* Lead deployment and maintenance of our flagship product, Terrascope, from GovCloud to JWICS; you will be the primary engineer for these initiatives\n* Lead a cross-functional team for government engagements, including facilitating design and implementation as well as project management, to meet contractual deliverables\n* Design and develop the software layer between the platform and the cloud providers\n* Understand the requirements of cloud-provider-agnostic and air-gapped environments\n* Be responsible for the runtime infrastructure under our production system and all developer resources\n* Automate packaging and testing for releases and bootstrapping\n* Attend technology conferences to support learning and bring ideas back from the greater community that can further improve our solutions\n\n\n\nMandatory Qualifications\n* Minimum of 5 years of experience as a DevOps engineer\n* 3 years of experience with Kubernetes to deploy, scale, and manage containers, with configuration using Helm charts\n* Comfortable at the command line and working within a Linux operating system (preferably RHEL)\n* Experience working with Docker or other container technologies\n* Experience working with cloud providers such as AWS, GCP, or Azure. 
For example, configuring networking (DNS, routing, load balancing)\n* Command of a scripting language such as Python or Bash, as well as Git\n* Proficiency and experience with infrastructure-as-code (IaC) tools such as Terraform\n\n\n\nPreferred Qualifications\n* Experience with JWICS integration (PKI, NPE certs)\n* Experience with air-gapped systems / on-premise deployment\n* Experience with databases and message queues\n* Experience with cloud/Kubernetes security \n* Computer science or electrical engineering degree, or related experience\n\n\n\n\n$140,000 - $210,000 a year. Salary range includes annual base salary only.\n\nAt Orbital Insight, we believe that a diverse workforce that reflects the diversity of our planet is the way to achieve our mission: to understand what is happening on and to the Earth. Orbital Insight is an Equal Employment Opportunity and Affirmative Action Employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender perception or identity, national origin, age, marital status, protected veteran status, or disability status. We do not accept unsolicited headhunter and agency resumes and will not pay any third-party agency or company that does not have a signed agreement with Orbital Insight. \n\n#Salary and compensation\n
No salary data was published by the company, so we estimated the salary based on similar jobs related to Design, SaaS, Python, Docker, Testing, DevOps, Cloud, API, Engineer and Linux:\n\n
$60,000 — $110,000/year\n
\n\n#Benefits\n
401(k)\n\nDistributed team\n\nAsync\n\nVision insurance\n\nDental insurance\n\nMedical insurance\n\nUnlimited vacation\n\nPaid time off\n\n4 day workweek\n\n401k matching\n\nCompany retreats\n\nCoworking budget\n\nLearning budget\n\nFree gym membership\n\nMental wellness budget\n\nHome office budget\n\nPay in crypto\n\nPseudonymous\n\nProfit sharing\n\nEquity compensation\n\nNo whiteboard interview\n\nNo monitoring system\n\nNo politics at work\n\nWe hire old (and young)\n\n
\n\n#Location\nArlington, VA
Please reference that you found the job on Remote OK; this helps us get more companies to post here, thanks!
When applying for jobs, you should NEVER have to pay to apply. You should also NEVER have to pay to buy equipment which they then pay you back for later. Also never pay for trainings you have to do. Those are scams! NEVER PAY FOR ANYTHING! Posts that link to pages with "how to work online" are also scams. Don't use them or pay for them. Also always verify you're actually talking to the company in the job post and not an imposter. A good idea is to check the domain name for the site/email and see if it's the actual company's main domain name. Scams in remote work are rampant, be careful! Read more to avoid scams. When clicking on the button to apply above, you will leave Remote OK and go to the job application page for that company outside this site. Remote OK accepts no liability or responsibility as a consequence of any reliance upon information on there (external sites) or here.
\nRestaurant365 is a SaaS company disrupting the restaurant industry! Our cloud-based platform provides a unique, centralized solution for accounting and back-office operations for restaurants. Restaurant365's culture is focused on empowering team members to produce top-notch results while elevating their skills. We're constantly evolving and improving to make sure we are and always will be "Best in Class" ... and we want that for you too!\n\n\nRestaurant365 is looking for an experienced Data Engineer to join our data warehouse team that enables the flow of information and analytics across the company. The Data Engineer will participate in the engineering of our enterprise data lake, data warehouse, and analytic solutions. This is a key role on a highly visible team that will partner across the organization with business and technical stakeholders to create the objects and data pipelines used for insights, analysis, executive reporting, and machine learning. You will have the exciting opportunity to shape and grow with a high-performing team and the modern data foundation that enables the data-driven culture fueling the company's growth. \n\n\n\nHow you'll add value: \n* Participate in the overall architecture, engineering, and operations of a modern data warehouse and analytics platform. \n* Design and develop the objects in the data lake and EDW that serve as core building blocks for the semantic layer and the datasets used for reporting and analytics across the enterprise. \n* Develop data pipelines, transformations (ETL/ELT), orchestration, and job controls using repeatable software development processes, quality assurance, release management, and monitoring capabilities. \n* Partner with internal business and technology stakeholders to understand their needs, and then design, build, and monitor pipelines that meet the company's growing business needs. 
\n* Look for continuous-improvement opportunities that automate workflows, reduce manual processes, reduce operational costs, uphold SLAs, and ensure scalability. \n* Use an automated observability framework to ensure the reliability of data quality, data integrity, and master data management. \n* Partner closely with peers in Product, Engineering, Enterprise Technology, and InfoSec teams on the shared enterprise needs of a data lake, data warehouse, semantic layer, transformation tools, BI tools, and machine learning. \n* Partner closely with peers in Business Intelligence, Data Science, and SMEs in partnering business units to translate analytics and business requirements into SQL and data structures. \n* Ensure platforms, products, and services are delivered with operational excellence and rigorous adherence to ITSM processes and InfoSec policies. \n* Adopt and follow sound Agile practices for the delivery of data engineering and analytics solutions. \n* Create documentation for reference, processes, data products, and data infrastructure. \n* Embrace ambiguity and other duties as assigned. \n\n\n\nWhat you'll need to be successful in this role: \n* 3-5 years of engineering experience in enterprise data warehousing, data engineering, business intelligence, and delivering analytics solutions \n* 1-2 years of SaaS industry experience required \n* Deep understanding of current technologies and design patterns for data warehousing, data pipelines, data modeling, analytics, visualization, and machine learning (e.g. 
Kimball methodology) \n* Solid understanding of modern distributed data architectures, data pipelines, and API pub/sub services \n* Experience engineering for SLA-driven data operations, with responsibility for uptime, delivery, consistency, scalability, and continuous improvement of data infrastructure \n* Ability to understand and translate business requirements into data/analytic solutions \n* Extensive experience with Agile development methodologies \n* Prior experience with at least one of: Snowflake, BigQuery, Synapse, Databricks, or Redshift \n* Highly proficient in both SQL and Python for data manipulation and assembly of Airflow DAGs \n* Experience with cloud administration and DevOps best practices on AWS and GCP, and/or general cloud architecture best practices, with accountability for cloud cost management \n* Strong interpersonal, leadership, and communication skills, with the ability to relate technical solutions to business terminology and goals \n* Ability to work independently in a remote culture and across many time zones and outsourced partners, likely CT or ET \n\n\n\nR365 Team Member Benefits & Compensation\n* This position has a salary range of $94K-$130K. The above range represents the expected salary range for this position. The actual salary may vary based upon several factors, including, but not limited to, relevant skills/experience, time in the role, business line, and geographic location. Restaurant365 focuses on equitable pay for our team and aims for transparency with our pay practices. \n* Comprehensive medical benefits, 100% paid for the employee\n* 401k + matching\n* Equity option grant\n* Unlimited PTO + company holidays\n* Wellness initiatives\n\n\n#BI-Remote\n\n\n$90,000 - $130,000 a year\n\nR365 is an Equal Opportunity Employer and we encourage all forward-thinkers who embrace change and possess a positive attitude to apply. \n\n#Salary and compensation\n
No salary data was published by the company, so we estimated the salary based on similar jobs related to Design, SaaS, InfoSec, Python, Accounting, DevOps, Cloud, API and Engineer:\n\n
$60,000 — $110,000/year\n
\n\n#Location\nRemote