Remote Docker + Python Jobs


Vizibl

Python Software Engineer

🇪🇺 EU-only · Remote OK original posting (verified)

flask
postgresql
fast api


Vizibl is hiring a Remote Python Software Engineer

**At Vizibl, we’re on a mission to help every company work better, together. We want to help all companies make a difference in the world by revolutionising the way they work together, empowering them to reach their full potential.**

We’re off to a great start too. Teams in some of the world’s largest enterprise companies are already collaborating with their suppliers through Vizibl and transforming the way they work to drive innovation together.

We welcome people from all backgrounds who seek the opportunity to help build a future where every company sees the benefit of working openly and collaboratively. If you have the passion, curiosity and collaborative spirit, work with us, and let’s help every company work better, together.

Vizibl is a growing SaaS platform used by the world’s largest organisations to help change the way they work. Our unique blend of Enterprise know-how coupled with our beautiful and usable products is one of the things our customers love about us.

Vizibl is looking for a talented Back End Engineer that’s passionate about building scalable, maintainable, performant backend services that put security first without compromising on our commitment to openness. As Vizibl grows, so do our ambitions for the future of our backend services, which is why this is a great opportunity for the right person to join a talented team to drive exciting new projects that will help change the way the world’s largest companies work with each other.

This person will work across our backend services to help maintain our REST API, develop solutions to new problems, be involved in the design and architecture of the platform and work collaboratively to support the growth of the platform. The ideal candidate is a self-motivated person who cares deeply about building excellent products. They don’t settle for OK and have a desire to integrate themselves deeply into the workings of the business.

As this is a fully remote position we’ll be looking for strong communication skills and the ability to motivate yourself and your team to work independently.

If you’re interested in building products that challenge the status quo in the enterprise space and you enjoy an abundance of autonomy with just the right amount of alignment, then we’d love to hear from you.

**Open to Everyone**

Vizibl is proud to be an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees.

**Working for Vizibl you will…**
* Have a huge amount of autonomy
* Work remotely
* Work with cutting-edge technologies
* Manage and support applications in production on our Kubernetes cluster
* Contribute to the design, architecture and development processes for a system used by the world’s largest enterprise organisations
* Be involved in the planning and development of solutions
* Be an ambassador for our product values
* Work with an amazing team of people spread out across Europe
* Contribute to a positive and empowering company culture
* Help to build and improve a platform used by some of the world’s biggest organisations
* Get support to grow and develop in your career

**What You’ll Need**
* Experience working in a professional engineering team
* 3+ years of Python experience (strong candidates with experience in another language may be considered)
* Experience building production-ready REST APIs
* Strong skills in information security architecture and security best practices
* Understanding of data modelling and querying in (Postgres) SQL
* Experience with Git
* English fluency and excellent communication skills
* Experience with TDD/BDD methodologies
* A desire to learn and improve
* Organisation and self-motivation
* Strong teamwork. Our product squads work very closely together to build solutions

**We’ll be impressed if**
* You have DevOps experience with Docker, Kubernetes, Google Cloud, etc.
* You have experience working in an agile team
* You have experience working in a remote team
* You have experience with queuing systems like Celery or Kafka
* You have worked on products that have been subject to regular security audits
* You write about back-end technologies
* You have frontend Javascript experience
* You have experience architecting complex systems
* You have experience scaling web applications
* You’re familiar with the enterprise project management space
* You’ve integrated with large corporate IT environments before

**Benefits**
* Huge amounts of autonomy
* Flexible working
* Work from anywhere
* Competitive compensation packages
* Options in a growing SaaS business
* Work with a great team
* Great career development opportunities
* Annual retreats

#Salary and compensation
$60,000 — $80,000/year

#Location
🇪🇺 EU-only
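The posting tags Flask, PostgreSQL and FastAPI, and the role centres on maintaining a Python REST API backed by Postgres. As a rough illustration only (the `suppliers` resource, table and connection string below are hypothetical, not Vizibl's actual schema or stack choices), a minimal endpoint in that style might look like this:

```python
# Illustrative endpoint only: the "suppliers" table, route and DSN are hypothetical.
import os

import psycopg2
from flask import Flask, jsonify

app = Flask(__name__)
DATABASE_URL = os.environ.get("DATABASE_URL", "postgresql://localhost/example_db")


@app.route("/api/suppliers/<int:supplier_id>", methods=["GET"])
def get_supplier(supplier_id):
    # One connection per request keeps the sketch short; a real service would pool connections.
    with psycopg2.connect(DATABASE_URL) as conn, conn.cursor() as cur:
        cur.execute("SELECT id, name FROM suppliers WHERE id = %s", (supplier_id,))
        row = cur.fetchone()
    if row is None:
        return jsonify({"error": "not found"}), 404
    return jsonify({"id": row[0], "name": row[1]})


if __name__ == "__main__":
    app.run(debug=True)
```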


See more jobs at Vizibl

# How do you apply?

To apply please follow the link below.
Apply for this job

Argyle

Senior Backend Engineer, API

Europe, South America, North America · Remote OK original posting (verified)

api
golang
kubernetes


Argyle is hiring a Remote Senior Backend Engineer, API

**Senior Backend Engineer - API
Remote**

Argyle is a remote-first, Series A fast-growing tech startup that has reimagined how we can use employment data.

Renting an apartment, buying a car, refinancing a home, applying for a loan. The first question that they will ask you is, "how do you earn your money?" Wouldn’t you think that information foundational to our society would be simple to manage, transfer and control? Well, it’s not!

Argyle provides businesses with a single global access point to employment data. Any company can process work verifications, gain real-time transparency into earnings and view worker profile details.

We are a fun and passionate group of people, all working remotely across 19 different countries and counting. We are now looking for Senior Backend Engineers to come and join our team.

**What will you do?**

- Bring experience and a big passion for API design, scalability, performance and end-to-end ownership
- Design, build, and maintain APIs, services, and systems across Argyle's engineering teams
- Debug production issues across services and multiple levels of the stack
- Work with engineers across the company to build new features at large scale
- Manage k8s clusters with a GitOps-driven approach
- Operate databases with large datasets
- Do concurrent systems programming

**What are we looking for**

- You enjoy and have experience building APIs
- You think about systems and services and write high-quality code. We work mostly in Python & Go. However, languages can be learned: we care much more about your general engineering skill than knowledge of a particular language or framework.
- You hold yourself and others to a high bar when working with production systems
- You take pride in working on projects to successful completion involving a wide variety of technologies and systems
- You thrive in a collaborative environment involving different stakeholders and subject matter experts

**Why Argyle?**

- Remote-first company
- International environment
- Flexible working hours
- Stock options
- Flexible vacation leave
- $1,000 after a month of employment to set up your home office
- MacBook

Argyle embraces diversity and equal opportunity. We are committed to building a team that represents a variety of backgrounds, perspectives, and skills. We believe the more inclusive we are, the better our company will be.

#Salary and compensation
$60,000 — $120,000/year

#Location
Europe, South America, North America


See more jobs at Argyle

Previous Remote Docker + Python Jobs

Argyle

Software Engineer

Europe, South America, Singapore, Taiwan, Thailand, Philippines · Remote OK original posting (verified, closed)

web crawling
reverse engineering
beautifulsoup

This job post is closed and the position is probably filled. Please do not apply.
**Software Engineer (Crawling/Reverse Engineering)
Remote - Europe/South America/Singapore/Taiwan/Thailand/Philippines**

**$40k – $80k**

Argyle is a remote-first, Series A fast-growing tech startup that has reimagined how we can use employment data.

Renting an apartment, buying a car, refinancing a home, applying for a loan. The first question that they will ask you is, "how do you earn your money?" Wouldn’t you think that information foundational to our society would be simple to manage, transfer and control? Well, it’s not!

Argyle provides businesses with a single global access point to employment data. Any company can process work verifications, gain real-time transparency into earnings and view worker profile details.

We are a fun and passionate group of people, all working remotely across 19 different countries and counting. We are now looking for multiple Scanner Engineers (Crawling/Reverse Engineering) to come and join our global team.

You will join a team of exceptionally talented engineers constantly looking for improvements and innovative ways to meet our business needs. Scrapers (we call them Scanners) are at the core of our business. It means you will constantly be fighting and innovating, owning and taking bold decisions.

**What will you do?**

- You will create, own and maintain Scanners
- You will contribute to general improvements such as shared libraries & frameworks
- You will work and communicate closely across different teams

**Our stack**

Python is our main language. Python libraries we use: celery, pydantic, playwright, puppeteer, BeautifulSoup, asyncio, httpx, pydash, mypy, pytest, poetry, pyenv, poppler, PdfMiner. We run Docker, Kubernetes, GCP, GitHub, ArgoCD.

**Requirements**

- Experience in development of robust web scrapers
- Reverse engineering knowledge of Android/iOS or JS/WebApps
- Knowledge of bot and captcha bypass mitigation tactics
- Python coding experience preferred
- Big bonus points if you are familiar with Android/iOS device verification frameworks (SafetyNet Attestation/DeviceCheck) and ways to bypass them
- Ability to think and act fast in a startup environment
- Not scared by a bit of chaos and rapid changes
- Ownership of your workload and commitments
- We are big fans of not pointing at things but getting them fixed

**Why Argyle?**

- Remote-first company
- International environment
- Flexible working hours
- Stock options
- Flexible vacation leave
- $1,000 after a month of employment to set up your home office
- MacBook

Argyle embraces diversity and equal opportunity. We are committed to building a team that represents a variety of backgrounds, perspectives, and skills. We believe the more inclusive we are, the better our company will be.

#Salary and compensation
$40,000 — $80,000/year

#Location
Europe, South America, Singapore, Taiwan, Thailand, Philippines
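For a sense of the day-to-day work, here is a minimal sketch of a scraper in the style of the stack listed above (asyncio + httpx + BeautifulSoup). The URL and CSS selectors are placeholders, not a real Argyle integration:

```python
# Minimal shape of a "Scanner": fetch a page and pull structured fields out of the HTML.
# The URL and CSS selectors are placeholders, not a real Argyle integration.
import asyncio

import httpx
from bs4 import BeautifulSoup


def _text(soup: BeautifulSoup, selector: str):
    # Return the stripped text of the first match, or None if the selector finds nothing.
    node = soup.select_one(selector)
    return node.get_text(strip=True) if node else None


async def scan_profile(url: str) -> dict:
    async with httpx.AsyncClient(timeout=30) as client:
        resp = await client.get(url)
        resp.raise_for_status()
    soup = BeautifulSoup(resp.text, "html.parser")
    return {
        "name": _text(soup, ".profile-name"),
        "employer": _text(soup, ".employer"),
    }


if __name__ == "__main__":
    print(asyncio.run(scan_profile("https://example.com/worker/123")))
```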


See more jobs at Argyle

# How do you apply?

This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.

Kontist

DevOps Engineer

🇪🇺 EU-only · Remote OK original posting (verified, closed)

aws
infrastructure
cd
ci

This job post is closed and the position is probably filled. Please do not apply.
### About the job
* Take full ownership of our cloud infrastructure on AWS
* Continuously improve the reliability, stability, and performance of the infrastructure
* Ensure infrastructure security and perform routine security audits
* Build out robust monitoring and alerting systems

### Your profile
* Agile experience and mindset
* Confident, assertive and communicative
* Intimate knowledge of the AWS ecosystem
* Experience in managing infrastructure as code
* Configuration management, CI/CD
* CloudFormation and/or Terraform
* Virtualization technology: Docker, Kubernetes
* Strong programming skills in Shell, Python, or Ruby
* Proficiency in English

### What’s in it for you
* Open to both a full-time office position and 100% remote work
* A highly motivated and ambitious working environment in a cohesive, fast-growing team
* A multicultural, diverse, and inclusive community where you can grow personally and professionally, including possibilities to move internally within the company
* Lovely sunny and green office in central Berlin with office dogs
* Flexible, trust-based working hours
* Personal coaching
* Regular team events and company off-sites
* Weekly German and English classes
* Sponsored daily on-site lunches
* Urban Sports Club membership

### About Kontist
Kontist is a Berlin, Germany-based financial services provider for freelancers with about 100 employees. We just announced the completion of a €25 million ($29.6M) Series B funding round in March 2021.

#### *Please do not apply if you are not able to work during our core working hours (10:00 - 16:00 CEST). Occasional visits to Berlin (Germany) might be required if you choose to work remotely.*

#Location
🇪🇺 EU-only


See more jobs at Kontist

# How do you apply?

This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.

Borg Collective

Tech Lead

🌏 Worldwide · Remote OK original posting (closed)

tech lead
javascript
 
This job post is closed and the position is probably filled. Please do not apply.
# About us
### Our mission
Our company's mission is to enable people to trust each other at scale.
### Problem we're solving
Social media is broken because users cannot easily move between platforms. Each platform is a silo; they have their own stacks, which are largely incompatible with each other.
### Our Solution
An interoperable social stack. Layers instead of silos. Users can easily move between platforms without losing their followers.
### What we're building
We are building the two bottom layers of that stack, i.e. a *universal social graph* and a *decentralized ranking algorithm*. We are also building the first UI for this new social stack – [hive.one](http://hive.one).
### Work setup
We are a small, VC-funded startup. We are a remote-first team. Most of the team is based in Europe (Berlin, London, Milan). You can make your own hours, but everybody is expected to be online during office hours in CET. We try to meet in person and work together for several days at least every 3 months. Other than that the company ‘lives’ in Slack, Notion and other tools enabling effective communication.

The job is full-time and permanent. If you work from Germany, you will get an employment contract. If you work from somewhere else in the world, you will work as a freelancer and need to fulfill the legal requirements for freelancing in your country. We are also happy to sponsor a work visa if you would like to move to Germany.

If you work from Berlin, you can work from our Berlin office, located in Mitte.

## About This Role
We are looking for a tech lead to take charge of a growing team of 2 (soon 3) software engineers.

This is a young and motivated team, ripe for a strong tech lead to take ownership and build strong workflows and processes, with plans for fast growth in the next 12-18 months.

You will lead our team, which is responsible for:
- Developing most of our core features (alongside our algorithm team)
- Managing our infrastructure
- Managing our APIs

### Our Tech Stack:
- Micro-services based architecture deployed on AWS
- Docker
- Protocol Buffers and gRPC
- Python
- Javascript
- ArangoDB

It's okay if you don't have experience with everything here; we would expect you to have enough experience to pick things up relatively quickly.

## Responsibilities
- You'll be responsible for week-to-week planning of your team and keeping each member accountable
- You'll be working with our CEO and the head of the Algorithm team to translate high-level ideas into projects/tasks
- As a tech lead, part of your role is to be a people manager - it's vital you dedicate time to managing your team members
- You'll be expected to drive technical direction in projects and ensure they meet scalability and robustness requirements
- You'll be hands-on on projects, getting deep into code, and be looked upon as the senior authority in software development tasks, projects, etc.
- You'll be responsible for hiring for your team
- You'll be responsible for communicating and planning between other teams

## Requirements
- Wide experience building scalable applications
- Experience with most of the tools in our stack - it's okay if you haven't used everything, we will teach you
- Experience leading remote development teams in the past - this could be at a job or working on an open-source project
- Excellent written and verbal communication
- You enjoy writing documentation and understand why it's valuable
- A self-starter - you can come up with ideas and implement them yourself
- You are a bar raiser - if you see others around you being lazy or slacking, instead of joining in you push back
- Your timezone is CET +/- 4

### Great to have
- Mathematical background
- Interest in the Crypto space

# Things we care about
- You use precise language and you insist that others do too
- You care about the tiny details (performance, a11y, etc.)

#Salary and compensation
$60,000 — $80,000/year

#Location
🌏 Worldwide


See more jobs at Borg Collective

# How do you apply?

This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.
This job post is closed and the position is probably filled. Please do not apply.
Doximity is transforming the healthcare industry. Our mission is to help doctors be more productive, informed, and connected. Achieving this vision requires a multitude of disciplines, expertise and perspectives. One of our core pillars has always been data. As a software engineer focused on the infrastructure aspect of our data stack, you will work on improving healthcare by advancing our data capabilities, best practices and systems. Our team brings a diverse set of technical and cultural backgrounds and we like to think pragmatically in choosing the tools most appropriate for the job at hand.

**About Us**

Our data teams schedule over 1,000 Python pipelines and over 350 Spark pipelines every 24 hours, resulting in over 5,000 data processing tasks each day. Additionally, our data endeavours leverage datasets ranging in size from a few hundred rows to a few hundred billion rows. The Doximity data teams rely heavily on Python 3, Airflow, Spark, MySQL, and Snowflake. To support this large undertaking, the data infrastructure team uses AWS, Terraform, and Docker to manage a high-performing and horizontally scalable data stack. The data infrastructure team is responsible for enabling and empowering the data analysts, machine learning engineers and data engineers at Doximity. We provide and evolve a foundation on which to build, and ensure that incidental complexities melt into our abstractions. Doximity has worked as a distributed team for a long time; pre-pandemic, Doximity was already about 65% distributed.

Find out more information on the Doximity engineering blog:
* Our [company core values](https://work.doximity.com/)
* Our [recruiting process](https://technology.doximity.com/articles/engineering-recruitment-process-doximity)
* Our [product development cycle](https://technology.doximity.com/articles/mofo-driven-product-development)
* Our [on-boarding & mentorship process](https://technology.doximity.com/articles/software-engineering-on-boarding-at-doximity)

**Here's How You Will Make an Impact**

As a data infrastructure engineer you will work with the rest of the data infrastructure team to design, architect, implement, and support data infrastructure, systems, and processes impacting all other data teams at Doximity. You will solidify our CI/CD pipelines, reduce production-impacting issues, and improve monitoring and logging. You will support and train data analysts, machine learning engineers, and data engineers on new or improved data infrastructure systems and processes. A key responsibility is to encourage data best practices through code by continuing the development of our internal data frameworks and libraries. It is also your responsibility to identify and address performance, scaling, or resource issues before they impact our product. You will spearhead, plan, and carry out the implementation of solutions while self-managing your time and focus.

**About you**

* You have professional data engineering or operations experience with a focus on data infrastructure
* You are fluent in Python and SQL, and feel at home in a remote Linux server session
* You have operational experience supporting data stacks through tools like Terraform, Docker, and continuous integration through tools like CircleCI
* You are foremost an engineer, making you passionate about high code quality, automated testing, and engineering best practices
* You have the ability to self-manage, prioritize, and deliver functional solutions
* You possess advanced knowledge of Linux, Git, and AWS (EMR, IAM, VPC, ECS, S3, RDS Aurora, Route53) in a multi-account environment
* You agree that concise and effective written and verbal communication is a must for a successful team

**Benefits & Perks**

* Generous time off policy
* Comprehensive benefits including medical, vision, dental, generous paternity and maternity leave, Life/AD&D, 401k, flex spending accounts, commuter benefits, equipment budget, and continuous education budget
* Pre-IPO stock incentives
* and much more! For a full list, see our careers page

**More info on Doximity**

We're thrilled to be named the Fastest Growing Company in the Bay Area, and one of Fast Company's Most Innovative Companies. Joining Doximity means being part of an incredibly talented and humble team. We work on amazing products that over 70% of US doctors (and over one million healthcare professionals) use to make their busy lives a little easier. We're driven by the goal of improving inefficiencies in our $3.5 trillion U.S. healthcare system and love creating technology that has a real, meaningful impact on people's lives. To learn more about our team, culture, and users, check out our careers page, company blog, and engineering blog. We're growing steadily, and there's plenty of opportunity for you to make an impact.

*Doximity is proud to be an equal opportunity employer and committed to providing employment opportunities regardless of race, religious creed, color, national origin, ancestry, physical disability, mental disability, medical condition, genetic information, marital status, sex, gender, gender identity, gender expression, pregnancy, childbirth and breastfeeding, age, sexual orientation, military or veteran status, or any other protected classification. We also consider qualified applicants with criminal histories, consistent with applicable federal, state, and local law.*

#Location
🇺🇸 US-only
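To make the scheduled Python pipelines mentioned above concrete, here is a minimal DAG sketch, assuming the Airflow 2.x API; the DAG id, schedule and task body are illustrative, not Doximity's actual pipelines:

```python
# Sketch of a scheduled pipeline, assuming the Airflow 2.x API.
# The DAG id, schedule and task body are illustrative placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_and_load():
    # Placeholder for real work, e.g. pull from MySQL, transform, load into Snowflake.
    print("pipeline step ran")


with DAG(
    dag_id="example_daily_pipeline",
    start_date=datetime(2021, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    PythonOperator(task_id="extract_and_load", python_callable=extract_and_load)
```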


See more jobs at Doximity

# How do you apply?

This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.
This job post is closed and the position is probably filled. Please do not apply.
# We're building the Data Platform of the Future
Join us if you want to rethink the way organizations interact with data. We are a **developer-first company**, committed to building around open protocols and delivering the best experience possible for data consumers and publishers.

Splitgraph is a **seed-stage, venture-funded startup hiring its initial team**. The two co-founders are looking to grow the team to five or six people. This is an opportunity to make a big impact on an agile team while working closely with the founders.

Splitgraph is a **remote-first organization**. The founders are based in the UK, and the company is incorporated in both the USA and the UK. Candidates are welcome to apply from any geography. We want to work with the most talented, thoughtful and productive engineers in the world.

# Open Positions
**Data Engineers welcome!** The job titles have "Software Engineer" in them, but at Splitgraph there's a lot of overlap between data and software engineering. We welcome candidates from all engineering backgrounds.

[Senior Software Engineer - Backend (mainly Python)](https://www.notion.so/splitgraph/Senior-Software-Engineer-Backend-2a2f9e278ba347069bf2566950857250)

[Senior Software Engineer - Frontend (mainly TypeScript)](https://www.notion.so/splitgraph/Senior-Software-Engineer-Frontend-6342cd76b0df483a9fd2ab6818070456)

→ [**Apply to Job**](https://4o99daw6ffu.typeform.com/to/ePkNQiDp) ← (same form for both positions)

# What is Splitgraph?
## Open Source Toolkit

[Our open-source product, sgr,](https://www.github.com/splitgraph/splitgraph) is a tool for building, versioning and querying reproducible datasets. It's inspired by Docker and Git, so it feels familiar. And it's powered by PostgreSQL, so it works seamlessly with existing tools in the Postgres ecosystem. Use Splitgraph to package your data into self-contained data images that you can share with other Splitgraph instances.

## Splitgraph Cloud

Splitgraph Cloud is a platform for data cataloging, integration and governance. The user can upload data, connect live databases, or "push" versioned snapshots to it. We give them a unified SQL interface to query that data, a catalog to discover and share it, and tools to build/push/pull it.

# Learn More About Us

- Listen to our interview on the [Software Engineering Daily podcast](https://softwareengineeringdaily.com/2020/11/06/splitgraph-data-catalog-and-proxy-with-miles-richardson/)
- Watch our co-founder Artjoms present [Splitgraph at the Bay Area ClickHouse meetup](https://www.youtube.com/watch?v=44CDs7hJTho)
- Read our HN/Reddit posts ([one](https://news.ycombinator.com/item?id=24233948) [two](https://news.ycombinator.com/item?id=23769420) [three](https://news.ycombinator.com/item?id=23627066) [four](https://old.reddit.com/r/datasets/comments/icty0r/we_made_40k_open_government_datasets_queryable/))
- [Read our blog](https://www.splitgraph.com/blog)
- Read the slides from our early (2018) presentations: ["Docker for Data"](https://www.slideshare.net/splitgraph/splitgraph-docker-for-data-119112722), [AHL Meetup](https://www.slideshare.net/splitgraph/splitgraph-ahl-talk)
- [Follow us on Twitter](https://ww.twitter.com/splitgraph)
- [Find us on GitHub](https://www.github.com/splitgraph)
- [Chat with us in our community Discord](https://discord.gg/eFEFRKm)
- Explore the [public data catalog](https://www.splitgraph.com/explore) where we index 40k+ datasets

# How We Work: What's our stack look like?

We prioritize developer experience and productivity. We resent repetition and inefficiency, and we never hesitate to automate the things that cause us friction. Here's a sampling of the languages and tools we work with:

- **[Python](https://www.python.org/) for the backend.** Our [core open source](https://www.github.com/splitgraph/splitgraph) tech is written in Python (with [a bit of C](https://github.com/splitgraph/Multicorn) to make it more interesting), as well as most of our backend code. The Python code powers everything from authentication routines to database migrations. We use the latest version and tools like [pytest](https://docs.pytest.org/en/stable/), [mypy](https://github.com/python/mypy) and [Poetry](https://python-poetry.org/) to help us write quality software.

- **[TypeScript](https://www.typescriptlang.org/) for the web stack.** We use TypeScript throughout our web stack. On the frontend we use [React](https://reactjs.org/) with [next.js](https://nextjs.org/). For data fetching we use [apollo-client](https://www.apollographql.com/docs/react/) with fully-typed GraphQL queries auto-generated by [graphql-codegen](https://graphql-code-generator.com/) based on the schema that [Postgraphile](https://www.graphile.org/postgraphile) creates by introspecting the database.

- **[PostgreSQL](https://www.postgresql.org/) for the database, because of course.** Splitgraph is a company built around Postgres, so of course we are going to use it for our own database. In fact, we actually have three databases. We have `auth-db` for storing sensitive data, `registry-db` which acts as a [Splitgraph peer](https://www.splitgraph.com/docs/publishing-data/push-data) so users can push Splitgraph images to it using [sgr](https://www.github.com/splitgraph/splitgraph), and `cloud-db` where we store the schemata that Postgraphile uses to autogenerate the GraphQL server.

- **[PL/pgSQL](https://www.postgresql.org/docs/current/plpgsql.html) and [PL/Python](https://www.postgresql.org/docs/current/plpython.html) for stored procedures.** We define a lot of core business logic directly in the database as stored procedures, which are ultimately [exposed by Postgraphile as GraphQL endpoints](https://www.graphile.org/postgraphile/functions/). We find this to be a surprisingly productive way of developing, as it eliminates the need for manually maintaining an API layer between data and code. It presents challenges for testing and maintainability, but we've built tools to help with database migrations and rollbacks, and an end-to-end testing framework that exercises the database routines.

- **[PostgREST](https://postgrest.org/en/v7.0.0/) for auto-generating a REST API for every repository.** We use this excellent library (written in [Haskell](https://www.haskell.org/)) to expose an [OpenAPI](https://github.com/OAI/OpenAPI-Specification)-compatible REST API for every repository on Splitgraph ([example](http://splitgraph.com/mildbyte/complex_dataset/latest/-/api-schema)).

- **Lua ([luajit](https://luajit.org/luajit.html) 5.x), C, and [embedded Python](https://docs.python.org/3/extending/embedding.html) for scripting [PgBouncer](https://www.pgbouncer.org/).** Our main product, the "data delivery network", is a single SQL endpoint where users can query any data on Splitgraph. Really it's a layer of PgBouncer instances orchestrating temporary Postgres databases and proxying queries to them, where we load and cache the data necessary to respond to a query. We've added scripting capabilities to enable things like query rewriting, column masking, authentication, ACL, orchestration, firewalling, etc.

- **[Docker](https://www.docker.com/) for packaging services.** Our CI pipeline builds every commit into about a dozen different Docker images, one for each of our services. A production instance of Splitgraph can be running over 60 different containers (including replicas).

- **[Makefile](https://www.gnu.org/software/make/manual/make.html) and [docker-compose](https://docs.docker.com/compose/) for development.** We use [a highly optimized Makefile](https://www.splitgraph.com/blog/makefile) and `docker-compose` so that developers can easily spin up a stack that mimics production in every way, while keeping it easy to hot reload, run tests, or add new services or configuration.

- **[Nomad](https://www.nomadproject.io/) for deployment and [Terraform](https://www.terraform.io/) for provisioning.** We use Nomad to manage deployments and background tasks. Along with Terraform, we're able to spin up a Splitgraph cluster on AWS, GCP, Scaleway or Azure in just a few minutes.

- **[Airflow](https://airflow.apache.org/) for job orchestration.** We use it to run and monitor jobs that maintain our catalog of [40,000 public datasets](https://www.splitgraph.com/blog/40k-sql-datasets), or ingest other public data into Splitgraph.

- **[Grafana](https://grafana.com/), [Prometheus](https://prometheus.io/), [ElasticSearch](https://www.elastic.co/), and [Kibana](https://www.elastic.co/kibana) for monitoring and metrics.** We believe it's important to self-host fundamental infrastructure like our monitoring stack. We use this to keep tabs on important metrics and the health of all Splitgraph deployments.

- **[Mattermost](https://mattermost.com/) for company chat.** We think it's absolutely bonkers to pay a company like Slack to hold your company communication hostage. That's why we self-host an instance of Mattermost for our internal chat. And of course, we can deploy it and update it with Terraform.

- **[Matomo](https://matomo.org/) for web analytics.** We take privacy seriously, and we try to avoid including any third party scripts on our web pages (currently we include zero). We self-host our analytics because we don't want to share our user data with third parties.

- **[Metabase](https://www.metabase.com/) and [Splitgraph](https://www.splitgraph.com) for BI and [dogfooding](https://en.wikipedia.org/wiki/Eating_your_own_dog_food).** We use Metabase as a frontend to a Splitgraph instance that connects to Postgres (our internal databases), MySQL (Matomo's database), and ElasticSearch (where we store logs and DDN analytics). We use this as a chance to dogfood our software and produce fancy charts.

- **The occasional best-of-breed SaaS services for organization.** As a privacy-conscious, independent-minded company, we try to avoid SaaS services as much as we can. But we still find ourselves unable to resist some of the better products out there. For organization we use tools like [Zoom](https://www.zoom.us) for video calls, [Miro](https://miro.com/) for brainstorming, [Notion](https://www.notion.so) for documentation (you're on it!), [Airtable](https://airtable.com/) for workflow management, [PivotalTracker](https://www.pivotaltracker.com/) for ticketing, and [GitLab](https://about.gitlab.com/) for dev-ops and CI.

- **Other fun technologies** including [HAProxy](http://www.haproxy.org/), [OpenResty](https://openresty.org/en/), [Varnish](https://varnish-cache.org/), and bash. We don't touch them much because they do their job well and rarely break.

# Life at Splitgraph
**We are a young company building the initial team.** As an early contributor, you'll have a chance to shape our initial mission, growth and company values.

**We think that remote work is the future**, and that's why we're building a remote-first organization. We chat on [Mattermost](https://mattermost.com/) and have video calls on Zoom. We brainstorm with [Miro](https://miro.com/) and organize with [Notion](https://www.notion.so).

**We try not to take ourselves too seriously**, but we are goal-oriented with an ambitious mission.

**We believe that as a small company, we can out-compete incumbents** by thinking from first principles about how organizations interact with data. We are very competitive.

# Benefits
- Fully remote
- Flexible working hours
- Generous compensation and equity package
- Opportunity to make high-impact contributions to an agile team

# How to Apply? Questions?
[**Complete the job application**](https://4o99daw6ffu.typeform.com/to/ePkNQiDp)

If you have any questions or concerns, feel free to email us at [[email protected]](mailto:[email protected])

#Location
🌏 Worldwide
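Because the "data delivery network" described above is a single SQL endpoint, a stock Postgres client should be able to query it. Below is a hedged sketch with psycopg2; the host, credentials and repository/table names are placeholders, not guaranteed Splitgraph endpoints:

```python
# Hedged sketch: the DDN speaks the Postgres wire protocol, so an ordinary Postgres
# client can query it. Host, credentials and repository/table names are placeholders.
import psycopg2

conn = psycopg2.connect(
    host="ddn.example.splitgraph.com",  # placeholder, not a guaranteed endpoint
    port=5432,
    user="YOUR_API_KEY",
    password="YOUR_API_SECRET",
    dbname="ddn",
)
try:
    with conn, conn.cursor() as cur:
        cur.execute('SELECT * FROM "some-namespace/some-dataset"."some_table" LIMIT 5')
        for row in cur.fetchall():
            print(row)
finally:
    conn.close()
```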


See more jobs at Splitgraph

# How do you apply?

This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.

DataKitchen

Manager, Toolchain Software Engineering

🌏 Worldwide · Remote OK original posting (verified, closed)

aws
azure

This job post is closed and the position is probably filled. Please do not apply.
Job description

We are seeking a world-class Manager of Toolchain Software Engineering, whose charter is to create a technical design-and-build team that can rapidly integrate dozens of tools in DataKitchen's DataOps platform. There are hundreds of tools that our customers use to do their day-to-day work: data science, data engineering, data visualization, and governance. We have integrated many of those tools, but our customers are better served by starting with example 'content.' And for us, that content is Recipes/Pipelines with working tool integrations across the varied toolchains/clouds that our customers and prospects use to do data analytics. We want our customers to start from example content and be doing DataOps on their platform in less than 10 minutes.

This is your chance to create a team from scratch and build a capability that is essential to our company's success. This is a technical role -- we are looking for a person who will code as well as hire and manage a team of engineers to do the work. The position demands strong communication, planning, and management abilities.

PRINCIPAL DUTIES & RESPONSIBILITIES

- Lead and grow the Toolchain Software Engineering organization, building a highly professional and motivated group.
- Deliver example content and integrations with consistently high quality and reliability, in a timely and predictable manner.
- Own the overall toolchain and example life cycle, including testing, updates, design, open-source sharing, and documentation.
- Manage departmental resources and staffing, and build a best-of-class engineering team.
- Manage customer support issues in order to deliver a timely resolution to their software issues.

ESSENTIAL KNOWLEDGE, SKILLS, AND EXPERIENCE

- BS or MS in Computer Science or related field
- At least 3-5 years of development experience building software or software tools
- Minimum of 1 year of experience in a Project Manager or engineering lead position
- Excellent verbal and written communication skills
- Technical experience in the following areas preferred: Python, Docker, SQL, AWS, Azure, or GCP; understanding of data science, data visualization, data quality, or data integration; Jenkins, DevOps, CI/CD

PERSONALITY TRAITS

- Leadership with flexibility and self-motivation – with a problem solver's attitude
- Highly effective written and verbal communication skills with a collaborative work style
- Customer focus, and a keen desire to make every customer successful
- Ability to create an open environment conducive to freely sharing information and ideas

Our company is committed to being remote-first, with employees in Cambridge MA, various other states, Buenos Aires Argentina, Italy, and other countries. You must be located within GMT+2 (e.g. Italy) to GMT-8 (e.g. CA). We will not consider candidates outside those time zones. We do not work with recruiters.

DataKitchen is profitable and self-funded and located in Cambridge, MA, USA.

#Salary and compensation
$50,000 — $85,000/year

#Location
🌏 Worldwide


See more jobs at DataKitchen

# How do you apply?

This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.

Prominent Edge

Lead DevOps Engineer

🇺🇸 US-only · Remote OK original posting (verified, closed)

devops
aws
gcp
azure

This job post is closed and the position is probably filled. Please do not apply.
We are looking for a Lead DevOps engineer to join our team at Prominent Edge. We are a small, stable, growing company that believes in doing things right. Our projects and the needs of our customers vary greatly; therefore, we always choose the technology stack and approach that best suits the particular problem and the goals of our customers. As a result, we want engineers who do high-quality work, stay current, and are up for learning and applying new technologies when appropriate. We want engineers who have an in-depth knowledge of Amazon Web Services and are up for using other infrastructures when needed. We understand that for our team to perform at its best, everyone needs to work on tasks that they enjoy. Many of our projects are web applications which often have a geospatial aspect to them. We also really take care of our employees as demonstrated in our exceptional benefits package. Check out our website at https://prominentedge.com/ for more information and apply through https://prominentedge.com/careers.

Required skills:
* Experience as a Lead Engineer.
* Minimum of 8 years of total experience, including a minimum of 1 year of web or software development experience.
* Experience automating the provisioning of environments by designing, implementing, and managing configuration and deployment infrastructure-as-code solutions.
* Experience delivering scalable solutions utilizing Amazon Web Services: EC2, S3, RDS, Lambda, API Gateway, Message Queues, and CloudFormation Templates.
* Experience with deploying and administering Kubernetes on AWS, GCP, or Azure.
* Capable of designing secure and scalable solutions.
* Strong *nix administration skills.
* Development in a Linux environment using Bash, PowerShell, Python, JS, Go, or Groovy.
* Experience automating and streamlining build, test, and deployment phases for continuous integration.
* Experience with automated deployment technologies such as Ansible, Puppet, or Chef.
* Experience administering automated build environments such as Jenkins and Hudson.
* Experience configuring and deploying logging and monitoring services - fluentd, logstash, GeoHashes, etc.
* Experience with Git/GitHub/GitLab.
* Experience with DockerHub or a container registry.
* Experience with building and deploying containers to a production environment.
* Strong knowledge of security and recovery from a DevOps perspective.

Bonus skills:
* Experience with RabbitMQ and its administration.
* Experience with kops.
* Experience with HashiCorp Vault, its administration, and Goldfish (frontend Vault UI).
* Experience with Helm for deployment to Kubernetes.
* Experience with CloudWatch.
* Experience with Ansible and/or a configuration management language.
* Experience with Ansible Tower (not necessary).
* Experience with VPNs (OpenVPN preferable).
* Experience with network administration and understanding network topology and architecture.
* Experience with AWS spot instances or Google preemptible instances.
* Experience with Grafana administration, SSO (Okta or JumpCloud preferable), LDAP / Active Directory administration, CloudHealth or cloud cost optimization.
* Experience with Kubernetes-based software - for example heptio/ark, ingress-nginx, anchore engine.
* Familiarity with the ELK stack.
* Familiarity with basic administrative tasks and building artifacts on Windows.
* Familiarity with other cloud infrastructures such as Cloud Foundry.
* Strong web or software engineering experience.
* Familiarity with security clearances in case you contribute to our non-commercial projects.

W2 Benefits:
* Not only do you get to join our team of awesome playful ninjas, we also have great benefits:
* Six weeks paid time off per year (PTO + holidays).
* Six percent 401k matching, vested immediately.
* Free PPO/POS healthcare for the entire family.
* We pay you for every hour you work. Need something extra? Give yourself a raise by doing more hours when you can.
* Want to take time off without using vacation time? Shuffle your hours around in any pay period.
* Want a new MacBook Pro laptop? We'll get you one. If you like your MacBook Pro, we'll buy you the new version whenever you want.
* Want some training or to travel to a conference that is relevant to your job? We offer that too!
* This organization participates in E-Verify.

About You:
* You believe in and practice Agile/DevOps.
* You are organized and eager to accept responsibility.
* You want a seat at the table at the inception of new efforts; you do not want things "thrown over the wall" to you.
* You are an active listener, empathetic and willing to understand and internalize the unique needs and concerns of each individual client.
* You adjust your speaking style for your audience and can interact successfully with both technical and non-technical clients.
* You are detail-oriented but never lose sight of the Big Picture.
* You can work equally well individually or as part of a team.
* U.S. citizenship required.

#Location
🇺🇸 US-only
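Much of this role is programmatic AWS work. As a small, generic illustration (the region and filter values are arbitrary choices, not requirements from the posting), listing running EC2 instances with boto3 looks like this:

```python
# Generic illustration: list running EC2 instances with boto3.
# Region and filter values are arbitrary; credentials come from the environment.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
paginator = ec2.get_paginator("describe_instances")
pages = paginator.paginate(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
)
for page in pages:
    for reservation in page["Reservations"]:
        for instance in reservation["Instances"]:
            print(instance["InstanceId"], instance.get("PrivateIpAddress"))
```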


See more jobs at Prominent Edge

# How do you apply?

This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.

PolySwarm

Senior Front End Developer

🌏 Worldwide · Remote OK original posting (verified, closed)

react
javascript
html

This job post is closed and the position is probably filled. Please do not apply.
We are looking for an experienced full-stack developer, focused on web front ends, to lead the development of our product UI/UX.

We are a cyber security company with many projects. Our web application is multi-lingual and has a base set of functionality, but we have an extensive list of features in our goal of a peerless product. Architecting, understanding our extensive backend, and implementing these features would be about 95% of your time. Additionally, our marketing websites are multi-lingual and updated about once per month, so this would be about 5% of your time.

**Join PolySwarm.**

We're developing innovative solutions to age-old information security problems - and we need your help.

PolySwarm is a marketplace that produces crowdsourced threat intelligence (malware detection today, more tomorrow).

No one has done this before. We'll get things wrong - that's okay! With your help, we'll get fewer things wrong, identify mistakes earlier, and improve processes to prevent future missteps.

You're in on the ground floor - you'll have a say in what we do and how we do it. By joining PolySwarm, you'll be joining a dynamic team on the bleeding edge of information (computer) security and blockchain - answering questions few have thought to ask.

**If You Are:**

* proficient with Docker, JavaScript, HTML, CSS/SASS, React, Redux, NodeJS, TypeScript, Jest, Storybook, and Gatsby
* familiar with Python or Rust
* experienced at creating clean/efficient UX
* experienced building both the front-end and back-end for a web application
* experienced with payment processing services like Stripe
* experienced at developing/managing a multi-language web application
* proficient in speaking/writing/reading English

... then we are interested in you.

**The Ideal Candidate Is**

* independently motivated & self-directing
* introspective: able to identify weak spots / problem areas in our existing processes or code and suggest / implement solutions
* interested in creating a top quality user experience for both desktop and mobile users
* interested in web application development
* someone with an eye for design

**We Offer**

* Competitive salaries
* Excellent health, dental, vision coverage
* Paid vacation days
* Flexible work hours - we have core hours on weekdays during US business hours, but outside of scheduled meetings, if you want to start a little earlier or stay a little later, that's up to you.
* Remote OK - you can work remotely, or you can work from one of our offices when this Covid-19 stuff ends.
* Powerful servers, laptops, desktops - whatever you need to be most productive!

*In compliance with federal law, all persons hired will be required to verify identity and eligibility to work in the United States and to complete the required employment eligibility verification form upon hire.*

**About PolySwarm**

The PolySwarm Team is made up of InfoSec veterans with decades of experience in government and industry. We're driven to improve the threat intelligence landscape for ourselves, our clients and the industry at large. By providing robust incentives that align participants' interest with continued innovation, PolySwarm will break the mold of today's iterative threat intelligence offerings.

All PolySwarm (Co-)Founders are also members of Narf Industries, LLC, a boutique information security firm specializing in tailored solutions for government and large enterprises. Narf operates on the cutting edge of InfoSec, blockchain and cryptographic research, having recently completed a blockchain-based identity management project for the Department of Homeland Security (DHS) as well as several cutting-edge partial homomorphic encryption projects on behalf of DARPA.

For more about the team and the team's advisers, head over to: https://polyswarm.io/team

To see our web application, head over to: https://polyswarm.network

**What we use:**

* Docker
* JavaScript
* HTML
* CSS/SASS
* React, Redux
* NodeJS
* TypeScript
* Templating
* Gatsby
* Storybook
* Jest

**Bonus skills:**

* Python
* Rust
* Elasticsearch
* Kibana
* Kubernetes

#Salary and compensation
$100,000 — $150,000/year

#Location
🌏 Worldwide


See more jobs at PolySwarm

# How do you apply?

This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.

InReach Ventures

Back End Developer

UK or Italy · Remote OK original posting (verified, closed)

java
aws
 
This job post is closed and the position is probably filled. Please do not apply.
InReach is changing how VC in Europe works, for good. Through data, software and Machine Learning, we are building an in-house platform to help us find, reach out to and invest in early-stage European startups, regardless of the city or country they’re based in.

We are looking for a back-end developer to continue the development of InReach’s data services. This involves:
* Cleaning / wrangling / merging / processing the data on companies and founders from across Europe
* Building data pipelines with the Machine Learning engineers
* Building APIs to support the front-end investment product used by the Investment team (named DIG)

This role will involve working across the stack: from DevOps (Terraform) to web scraping and Machine Learning (Python), all the way to data pipelines and web services (Java) and getting stuck into the front-end (Javascript). It’s a great opportunity to hone your skills and master some new ones.

It is important to us that candidates be passionate about helping entrepreneurs and startups. This is our bread-and-butter and we want you to be involved.

InReach is a remote-first employer and we are looking to this hire to help us become an exceptional place to work for remote employees. Whether you are in the office or remote, we are looking for people with excellent written and verbal communication skills.

### Background Reading:
* [InReach Ventures, the 'AI-powered' European VC, closes new €53M fund](https://techcrunch.com/2019/02/11/inreach-ventures-the-ai-powered-european-vc-closes-new-e53m-fund/?guccounter=1)
* [The Full-Stack Venture Capital](https://medium.com/entrepreneurship-at-work/the-full-stack-venture-capital-8a5cffe4d71)
* [Roberto Bonanzinga starts InReach Ventures with DIG platform](https://www.businessinsider.com/roberto-bonanzinga-starts-inreach-ventures-with-dig-platform-2015-11?r=US&IR=T)
* [Exceptional Communication Guidelines](https://www.craft.do/s/Isrjt4KaHMPQ)

## Responsibilities

* Creatively and quickly coming up with effective solutions to undefined problems
* Choosing technology that is modern but not hype-driven
* Developing features and tests quickly with good, clean code
* Being part of the wider development team, reviewing code and participating in architecture from across the stack
* Communicating exceptionally, both asynchronously (written) and synchronously (spoken)
* Helping to shape InReach as a remote-first organization

## Technologies

Given that this position touches so much of the stack, it will be difficult for a candidate that only has experience in Python or only in Java to become effective quickly. While we expect the candidate to be stronger in one or the other, some professional exposure to both is required.

In addition to the programming skills and the ability to write well-designed and tested code, infrastructure within modern cloud platforms and sound architectural reasoning are expected.

None of these are a prerequisite, but they help:
* Functional Programming
* Reactive Streams (RxJava2)
* Terraform
* Postgres
* ElasticSearch
* SQS
* DynamoDB
* AWS Lambda
* Docker
* Dropwizard
* Maven
* Pipenv
* Javascript
* React
* NodeJS

## Interview Process
* 15m video chat with Ben, CTO, to find out more about InReach and the role
* 2h data pipeline technical test (Python)
* 2h web service technical test (Java)
* 30m architectural discussion with Ben, talking through the work you did
* 2h interview with the different team members from across InReach. We’re a small company so it’s important we see how we’ll all work together - not just the tech team!

#Salary and compensation
$55,000 — $70,000/year

#Location
UK or Italy


See more jobs at InReach Ventures

# How do you apply?

This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.

Vizibl

Backend Engineer

🇪🇺 EU-only · Remote OK original posting (closed)

kubernetes
gcp

This job post is closed and the position is probably filled. Please do not apply.
**At Vizibl, we’re on a mission to help every company work better, together. We want to help all companies make a difference in the world by revolutionising the way they work together, empowering them to reach their full potential.\nWe’re off to a great start too. Teams in some of the world’s largest enterprise companies are already collaborating with their suppliers through Vizibl and transforming the way they work to drive innovation together.**\n\nWe welcome people from all backgrounds who seek the opportunity to help build a future where every company sees the benefit of working openly and collaboratively. If you have the passion, curiosity and collaborative spirit, work with us, and let’s help every company work better, together.\n\nVizibl is a growing SaaS platform used by the world’s largest organisations to help change the way they work. Our unique blend of Enterprise know-how coupled with our beautiful and usable products is one of the things our customers love about us.\n\nVizibl is looking for a talented Back End Engineer that’s passionate about building scalable, maintainable, performant backend services that put security first without compromising on our commitment to openness. As Vizibl grows, so do our ambitions for the future of our backend services, which is why this is a great opportunity for the right person to join a talented team to drive exciting new projects that will help change the way the world’s largest companies work with each other.\n\nThis person will work across our backend services to help maintain our REST api, develop solutions to new problems, be involved in the design and architecture of the platform and work collaboratively to support the growth of the platform. The ideal candidate is a self-motivated person that cares deeply about building excellent products. 
They don’t settle for OK and have a desire to integrate themselves deeply into the working of the business.\n\nAs this is a fully remote position we'll be looking for strong communication skills and the ability to motivate yourself and your team to work independently.\n\nIf you’re interested in building products that challenge the status quo in the enterprise space and you enjoy an abundance of autonomy with just the right amount of alignment then we’d love to hear from you.\n\n# Responsibilities\n **Working for Vizibl you'll..**\n* Have a huge amount of autonomy\n* Work remotely\n* Work with cutting edge technologies\n* Manage and support applications in production on our Kubernetes cluster\n* Contribute to the design and architecture and development processes for a system used by the world’s largest enterprise organisations\n* Be involved in the planning and development of solutions\n* Be an ambassador for our product values\n* Work with an amazing team of people spread out across Europe\n* Contribute to a positive and empowering company culture \n\n# Requirements\n**What You’ll Need**\n* Have experience working in a professional engineering team\n* 3+ years of Python experience (strong candidates with experience in another language may be considered)\n* DevOps experience with Docker, Kubernetes, Google Cloud, etc\n* Experience building production-ready REST APIs\n* Strong skills in information security architecture and security best practices\n* Understanding of data modelling and querying in (Postgres) SQL\n* Experience with Git\n* English fluency and excellent communication skills\n* Experience with TDD/BDD methodologies\n* A desire to learn and improve\n\n**We’ll be impressed if**\n* You have experience working in an agile team\n* You have experience working in a remote team\n* You have experience with queuing systems like Celery, Kafka\n* You have worked on products that have been subject to regular security audits\n* You write about back-end technologies\n* You have frontend Javascript experience\n* You have experience architecting complex systems\n* You have experience scaling web applications\n* You’re familiar with the enterprise project management space\n* You’ve integrated with large corporate IT environments before \n\n#Salary and compensation\n$72,000/year\n\n\n#Location\n🇪🇺 EU-only
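A minimal, hypothetical sketch of the sort of Python REST endpoint the posting describes, using Flask (one of the tags on this listing); the `/suppliers` resource, its fields and the in-memory store are invented:

```python
# Illustrative only: a tiny Flask REST endpoint in the spirit of the backend
# services described above. SUPPLIERS stands in for a Postgres-backed table.
from flask import Flask, jsonify, request

app = Flask(__name__)
SUPPLIERS = {}

@app.route("/suppliers", methods=["POST"])
def create_supplier():
    payload = request.get_json(force=True) or {}
    if not payload.get("name"):
        return jsonify(error="name is required"), 400
    supplier_id = len(SUPPLIERS) + 1
    SUPPLIERS[supplier_id] = {"id": supplier_id, "name": payload["name"]}
    return jsonify(SUPPLIERS[supplier_id]), 201

@app.route("/suppliers/<int:supplier_id>", methods=["GET"])
def get_supplier(supplier_id):
    supplier = SUPPLIERS.get(supplier_id)
    if supplier is None:
        return jsonify(error="not found"), 404
    return jsonify(supplier), 200

if __name__ == "__main__":
    app.run(debug=True)
```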


See more jobs at Vizibl

# How do you apply?\n\n This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.

Pupil Labs


This position is a Remote OK original posting verified closed
🌏 Worldwide

Senior Full Stack Developer


Pupil Labs

🌏 WorldwideOriginally posted on Remote OK

devops

kubernetes
This job post is closed and the position is probably filled. Please do not apply.
We are looking to hire a Senior Full Stack Developer with solid DevOps experience to join our software team (remote, or onsite in Berlin, Germany or Bangkok, Thailand) on a full-time basis.\n\nYou will be working on a core product - Pupil Cloud - that will be integral to our eye-tracking platform. This product addresses a number of exciting computational and infrastructural challenges that will involve close collaboration with our R&D and Design teams. \n\nPupil Labs is the world-leading provider of wearable eye-tracking solutions. We design, engineer, build, and ship hardware and analysis tools that are used by thousands of researchers in a variety of fields, ranging from medicine and psychology to UX design and human-computer interaction.\n\n# Responsibilities\n You will be working with a team of software engineers to build a cloud-based storage, visualization, enrichment, and analysis platform. \n\n# Requirements\n* 3+ years of production experience\n* DevOps based around Kubernetes + Docker\n* Experience with Chef/Ansible/Puppet\n* Solid grasp of Python\n* Understands security\n* Experience with web-based services\n* Monitoring, implementing, and ensuring reliability of HA systems\n* Experience with message queues\n* Load/stress testing\n* Part of 24x7 on-call rota (ideal timezone between UTC-3 and UTC-9)\n* You are highly independent and motivated\n* You care deeply about performance\n* You have a strong command of spoken and written English\n\n**Bonus**\n* Experience with Gitlab-based CI/CD\n* Experience with Deep Learning frameworks e.g. Tensorflow/Torch\n* Experience with video processing in cloud systems\n* Experience working with partitioned datasets for scalability\n* SQL experience\n\n**Technology we use**\n* Docker\n* Kubernetes\n* Postgresql\n* Python\n* Javascript\n* Redis\n* Nginx\n* Grafana\n* Prometheus\n\n\n#Location\n🌏 Worldwide


See more jobs at Pupil Labs

# How do you apply?\n\n This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.

PolySwarm


This position is a Remote OK original posting verified closed
🌏 Worldwide

Senior Fullstack Developer


PolySwarm

🌏 WorldwideOriginally posted on Remote OK

javascript

blockchain

This job post is closed and the position is probably filled. Please do not apply.
We are looking for an experienced full-stack developer, focused on web front ends, to lead the development of our product UI/X.\n\nWe are a cyber security company with many projects. Our web application is multi-lingual and has a base set of functionality, but we have an extensive list of features in our goal of a peerless product. Architecting, understanding our extensive backend, and implementing these features would be about 95% of your time. Additionally, our marketing websites are multi-lingual and updated about once per week, so this would be about 5% of your time. \n\n\n**Join PolySwarm.**\n\nWe're developing innovative solutions to age-old information security problems - and we need your help.\n\nPolySwarm is a marketplace that produces crowdsourced threat intelligence (malware detection today, more tomorrow). \n\nNo one has done this before. We'll get things wrong - that's okay! With your help, we'll get fewer things wrong, identify mistakes earlier, and improve processes to prevent future missteps.\n\nYou're in on the ground floor - you'll have a say in what we do and how we do it. By joining PolySwarm, you'll be joining a dynamic team on the bleeding edge of information (computer) security and blockchain - answering questions few have thought to ask.\n\nIf you are:\n\n* proficient with Python, Docker, JavaScript, HTML, CSS/SASS, React, Redux, NodeJS, TypeScript, and Templating (Mustache/Handlebars)\n* familiar with Jekyll/Gatsby\n* experienced at creating clean/efficient UX\n* experienced building both the front-end and back-end for a web application\n* experienced at developing/managing a multi-language web application\n\n... then we are interested in you.\n\n**The Ideal Candidate Is...**\n\n* independently motivated & self-directing\n* introspective: able to identify weak spots / problem areas in our existing processes or code and suggest / implement solutions\n* interested in creating a top quality user experience for both desktop and mobile users\n* interested in web application development\n* has an eye for design\n\n\n**We Offer**\n\n* Competitive salaries\n* Excellent health, dental, vision coverage\n* Paid vacation days\n* Travel (if you like). We have offices in San Diego, Puerto Rico and Tokyo and we often find ourselves travelling elsewhere. If travel interests you, we can scratch that itch.\n* Flexible work hours - we have core hours on weekdays, but outside of scheduled meetings, we don't care *when* you work, we care about your output.\n* Remote Ok - You can work remotely, or you can work from one of our offices\n* Powerful servers, laptops, desktops - whatever you need to be most productive!\n\n In compliance with federal law, all persons hired will be required to verify identity and eligibility to work in the United States and to complete the required employment eligibility verification form upon hire.\n\n**About PolySwarm**\n\nThe PolySwarm Team is made up of InfoSec veterans with decades of experience in government and industry. We’re driven to improve the threat intelligence landscape for ourselves, our clients and the industry at large. By providing robust incentives that align participants’ interest with continued innovation, PolySwarm will break the mold of today’s iterative threat intelligence offerings.\n\nAll PolySwarm (Co-)Founders are also members of Narf Industries, LLC, a boutique information security firm specializing in tailored solutions for government and large enterprises. 
Narf operates on the cutting-edge of InfoSec, blockchain and cryptographic research, having recently completed a blockchain-based identity management project for the Department of Homeland Security (DHS) as well as several cutting-edge partial homomorphic encryption projects on behalf of DARPA.\n\nFor more about the team and the team's advisers, head over to: https://polyswarm.io/the_team\n\nTo see our web application, head over to: https://polyswarm.network\n \n\n#Salary and compensation\n$100,000/year\n\n\n#Location\n🌏 Worldwide


See more jobs at PolySwarm

# How do you apply?\n\n This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.

Snowdrop


This position is a Remote OK original posting verified closed
🇪🇺 EU-only

Senior Software Engineer


Snowdrop

🇪🇺 EU-onlyOriginally posted on Remote OK

js

flask

This job post is closed and the position is probably filled. Please do not apply.
Snowdrop is an overdraft protection service. Americans pay $30bn per year in overdraft fees, the bulk of which is paid by the poor. Our software monitors our users’ bank accounts in real time, and we offer them a small advance if their accounts get close to a negative balance, to help prevent them from being charged overdraft fees. Our mission is to give those $30bn back to bank customers while earning a small fraction of it as our revenue.\n\nWe are looking for a Senior Software Engineer to join our team. \n\nThe team is still small and the business is growing extremely fast. Because of that, we are looking for someone who is excited to take on wide responsibility across different parts of the system.\n\n# Responsibilities\n * Work closely with our CEO and the rest of the team on defining the product roadmap\n* Take full ownership, from idea to implementation, of core components of our system\n* Collaborate with our data scientists to develop data pipelines to extract insights\n* Help the team define and improve engineering standards, processes, and tooling \n\n# Requirements\n* Experience as a software engineer, with a proven track record of delivering high-quality code\n* Working experience with Python and Javascript\n* Experience with Docker, Flask, Django, PostgreSQL, Vue and system security is a plus\n* Fast learner, and excited to work on products from idea to successful completion\n* Experience working in a remote setting is a plus \n\n#Salary and compensation\n$40,000 — $80,000/year, based on qualifications\n\n\n#Location\n🇪🇺 EU-only
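A minimal, hypothetical illustration of the balance-monitoring rule the posting describes; the threshold, cap and field names are invented and do not reflect Snowdrop's actual logic:

```python
# Invented example: offer a small advance when a balance drops below a safety threshold.
from dataclasses import dataclass

THRESHOLD_CENTS = 2_000      # assumed trigger: balance under $20
MAX_ADVANCE_CENTS = 5_000    # assumed cap on a single advance

@dataclass
class Account:
    user_id: str
    balance_cents: int

def advance_offer(account: Account) -> int:
    """Return the advance (in cents) to offer, or 0 if none is needed."""
    if account.balance_cents >= THRESHOLD_CENTS:
        return 0
    shortfall = THRESHOLD_CENTS - account.balance_cents
    return min(MAX_ADVANCE_CENTS, shortfall)

if __name__ == "__main__":
    print(advance_offer(Account("u1", 1_250)))   # 750 -> tops the user back up to $20
    print(advance_offer(Account("u2", 10_000)))  # 0   -> no advance needed
```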


See more jobs at Snowdrop

# How do you apply?\n\n This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.

MageMojo


This position is a Remote OK original posting verified closed
🌏 Worldwide

Senior SRE Kubernetes


MageMojo

🌏 WorldwideOriginally posted on Remote OK

saltstack

kubernetes

jenkins

This job post is closed and the position is probably filled. Please do not apply.
# **Responsibilities**\nWe are looking for a Senior SRE to join our team and help develop our product, Mojo Stratus. Mojo Stratus is a Kubernetes / Docker PaaS for e-commerce (Magento Commerce) hosting. You will work on increasing uptime, building new features, and improving our deployment processes. You'll be working daily with Kubernetes and Docker in an AWS environment. You will also be using SaltStack, Jenkins, Terraform, and Python.\n# **About Us**\n\nWe are MageMojo Magento hosting, a group of 40 talented devops peeps who all work remotely. We believe passionate, talented people all working together smoothly yields awesome work that lets us build a solid infra and processes to prevent fires instead of spending time always putting out fires. We get along and we constantly improve only because we don't bullsh!t each other or our clients; we don't hide or say what we think others want to hear. We do this with respect and we value truth, transparency, and honesty above all else. Of course, there are times when we are in headphones-on, hyper-concentration mode. But we also draw a lot of support from each other and try to focus on the "human side" of support. We are curious students of the internet age who are interested in continuing to enhance our own work, sharing what we've learned, and learning from those around us.\n\n# **About You**\n\nYou are a solid human being with a good sense of humor in search of a job with a crew that is big enough to host important, meaningful sites and small enough to have fun doing it. Attention to detail and seamless customer experience are important to you. You feel at home in the shell and have some scripting knowledge. You know there's nothing you can't do and no problem you can't solve with the help of the Interwebz, and Google of course. You have strong opinions about the way things should be done but aren't necessarily a zealot for any one process, technology, or denomination. You're inclined to express yourself through animated gifs and obscure movie quotes from the youtubes. You work well at the 11th hour, but even better at the first and second so we can be out at end of shift. You have an ear to the ground for new tech, whether it comes from hacker news or a programming subreddit, and a desire to dive in and try things out. \n\n# Requirements\n**Minimum Qualifications**\n\n* 3+ years building and maintaining YOUR OWN Kubernetes cluster (not GKE / EKS / Kops / Rancher).\n* 3+ years working with either SaltStack or Ansible.\n* 3+ years working with AWS.\n* 3+ years experience with CI/CD tooling with either Jenkins / Gitlab / Drone.io.\n* 3+ years of experience working in a 24x7x365 mission-critical environment.\n\n**Bonus Experience:**\n\n* HPA\n* Cilium\n* BPF\n* Drone.io\n* Helm Charts\n* Magento Hosting / Development\n* Previous employment at a hosting provider \n\n#Salary and compensation\n$120,000/year\n\n\n#Location\n🌏 Worldwide
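A minimal, hypothetical sketch of the kind of cluster health check an SRE on a platform like Mojo Stratus might automate, using the official `kubernetes` Python client; the namespace is invented and nothing here reflects MageMojo's actual tooling:

```python
# Hypothetical health sweep: list pods in a namespace and report any that are
# not Running, using the official `kubernetes` Python client.
from kubernetes import client, config

def unhealthy_pods(namespace="magento-stores"):   # namespace is invented
    config.load_kube_config()                     # or load_incluster_config() inside the cluster
    v1 = client.CoreV1Api()
    return [
        (pod.metadata.name, pod.status.phase)
        for pod in v1.list_namespaced_pod(namespace).items
        if pod.status.phase != "Running"
    ]

if __name__ == "__main__":
    for name, phase in unhealthy_pods():
        print(f"{name}: {phase}")
```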


See more jobs at MageMojo

# How do you apply?\n\n This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.

Screenly


This position is a Remote OK original posting verified closed

Frontend and Python Developer


Screenly

Originally posted on Remote OK

javascript

django

html5

This job post is closed and the position is probably filled. Please do not apply.
[Screenly](https://www.screenly.io), the digital signage company, seeks a front-end developer to help create a great-looking, modern and responsive web UI. We're looking for people who are ambitious and deeply driven to make a difference. \n\nYour ability as a general pragmatic developer and fast learner is what truly matters to us. As you read the specifics below, keep in mind that the skill we are looking for is more along the lines of "awesome solver of challenges" than any one specific skill. \n\nYou would work on JavaScript, React, HTML, CSS and other techs to create a great-looking, modern and responsive web-driven UI. You would also be ready to dive into the Django and Python underpinnings and add the bits necessary for a great user experience. Underneath it all you'll find Postgres, Redis and a good heaping of Docker and Kubernetes.\n\nWhile this ad is about the front-end, we're really into jacks-of-all-trades. We think in a startup every team member needs to be nimble and flexible enough to do whatever it takes to deliver the product.\n\nYou can expect to work with a small full-time team of crafty developers, in a quickly growing startup. We're a remote-only shop so you'll never feel you're not in the loop due to not being in the main office -- there is no main office! You can learn more about how we work [here](https://www.screenly.io/blog/2016/11/23/how-we-work-at-screenly/).\n\nYou like:\n\n* Detail-oriented front-end work.\n* Great UX.\n* Occasionally fun UI and UX.\n* Unit tests, integration tests, UI tests, test tests, every test.\n* Python.\n\nThis is a full-time position. We only hire individuals, not agencies. Please state the square root of twenty five in your cover letter. You will need to attend a daily meeting and a fortnightly video planning meeting (around 16:00 UTC), so please don’t apply if that doesn’t work for you or you can’t work a semi-regular 5-day week. Hours are very flexible, but you do have to be around often enough for the rest of the team to interact with you.
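A minimal, hypothetical single-file Django sketch of the kind of JSON endpoint a React UI like the one described might call; the `/api/screens/` route and its payload are invented:

```python
# Hypothetical single-file Django app exposing one JSON endpoint.
# Run with:  python this_file.py runserver
import sys
from django.conf import settings
from django.http import JsonResponse
from django.urls import path

settings.configure(DEBUG=True, ROOT_URLCONF=__name__, ALLOWED_HOSTS=["*"], SECRET_KEY="dev-only")

def screens(request):
    # Invented payload: the list of signage screens the UI would render.
    return JsonResponse({"screens": [{"id": 1, "name": "Lobby", "online": True}]})

urlpatterns = [path("api/screens/", screens)]

if __name__ == "__main__":
    from django.core.management import execute_from_command_line
    execute_from_command_line(sys.argv)
```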


See more jobs at Screenly

# How do you apply?\n\n This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.

Platform.sh


This position is a Remote OK original posting closed

DevOps / SRE


Platform.sh

Originally posted on Remote OK

aws

magento

This job post is closed and the position is probably filled. Please do not apply.
Mission \n\nTo reinforce our technical prowess, we are looking to grow our operations team. If you’re looking for an exciting, high-growth opportunity with an award-winning, cutting-edge company, this could be just the job for you.\n\nFor its PaaS solution, https://platform.sh is looking for an Operations and Service Reliability Engineer with a taste for Python and Go, great Linux system understanding, and a real hunger for the challenges of building robust, distributed systems.\n\nPlatform.sh is a PaaS shrouded in a lot of black magic (we can consistently clone a whole running cluster, with its state, databases, indexes in a matter of seconds). We want to get this down to the hundreds-of-milliseconds domain. Interested? There is more...\n\nOur external API is pure Hypermedia REST + oAuth on top of Pyramid. It mechanizes the Git layer and needs more features.\n\nWe can consistently generate from the same manifest a Docker container, an LXC one, or VM disk images (AWS, Azure, OpenStack); we want more targets.\n\nWe probably have the highest container density in the industry. We need to get it higher.\n\nWe support any Python, Ruby, NodeJS or PHP, plus Java and .NET; time to roll out Elixir, of course (and Rust. We need Rust).\n\nWe need to have more auto-healing on the high-availability clusters. We need more performance out of our multi-protocol ssh proxy. We need work on our Ceph implementation. We need to get the Debian package generation streamlined and faster. We need… great ideas on how to make Platform.sh even better.\n\n\nAbout Platform.sh \n\nPlatform.sh is an idea-to-cloud application platform that simplifies cloud infrastructures.\n\nWe give developers the tools they need to experiment, innovate, get rapid feedback and deliver better-quality features with speed and confidence thanks to our unique rapid cloning technology.\n\nPlatform.sh serves thousands of customers worldwide including The Financial Times, Gap, Magento Commerce, Orange, Hachette, Ikea, Stanford University, Harvard University, The British Council, and Lufthansa.\n\nWe want people who are passionate, open, multicultural, friendly, humble and smart to join us and help this fast-growing, award-winning company to revolutionize the tech industry.\n\n# Responsibilities\n Directly reporting to our VP of Infrastructure and in close interaction with our Engineering and Customer Support teams, you will be responsible for:\n\n* cloud operations: configure clusters, deploy stuff, follow up on alerts, help customer support debug issues.\n* automating all of the above so they can instead drink margaritas (or non-alcoholic beverages, of course)\n* creating systems, tools & processes that will enhance our support and operations efficiency\n* improving service quality, discipline and reliability throughout the lifecycle\n* monitoring operating objectives, streamlining and automating intervention\n* continuous learning from Operations experience, modeled as software\n \n\n# Requirements\nThe ideal candidate:\n\n* has proven successful experience in an operations role\n* has demonstrated the ability to successfully manage cloud-based infrastructure for a fast-growing organization\n* has experience with containerization technologies\n* has had exposure to cloud services (AWS)\n* understands how an OS works, knows networking, how git works, and the constraints of a distributed system\n* has Puppet experience\n* is proficient in Python (Golang a plus)\n\nNice to have:\n\n* knowledge of Magento Ecommerce, Symfony, Drupal, eZ Platform, or Typo3\n* relational database skills\n* public speaking experience\n* ability to kick ass in Chess\n* proficiency in Rust grants you bonus points\n\nNote: We don't like stress, so we build everything to be robust and resilient, but stuff does break. This is a role with on-call duties. If pager duty fills you with dread... well, this might not be a fit.
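A minimal, hypothetical sketch of a JSON endpoint on Pyramid, the framework the posting names for its REST API; the `/environments` resource and its payload are invented and are not Platform.sh's actual API:

```python
# Illustrative only: one JSON route served by Pyramid with a hypermedia-style payload.
from wsgiref.simple_server import make_server
from pyramid.config import Configurator

def list_environments(request):
    # Invented data plus a self link, loosely in the hypermedia spirit described above.
    return {
        "count": 1,
        "_links": {"self": {"href": request.route_url("environments")}},
        "environments": [{"name": "master", "status": "active"}],
    }

if __name__ == "__main__":
    with Configurator() as config:
        config.add_route("environments", "/environments")
        config.add_view(list_environments, route_name="environments", renderer="json")
        app = config.make_wsgi_app()
    make_server("0.0.0.0", 8080, app).serve_forever()
```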


See more jobs at Platform.sh

# How do you apply?\n\n This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.

Density Inc.


This position is a Remote OK original posting closed

DevOps Engineer


Density Inc.

Originally posted on Remote OK

devops

nomad

django

This job post is closed and the position is probably filled. Please do not apply.
When someone installs Density in a location, they get access to real time, accurate people count. Our API is the foundational component that customers rely on when integrating and interacting with our data. Making people-count data highly available, in a real-time setting, means all underlying systems must be continuously operational and ready to scale at a moment's notice.\n\nWe're looking for an engineer to take the helm of our infrastructure and grow it to handle the needs of our product. This means playing a large role in both the hardware and software teams, crafting the deployment, orchestration, and management systems to power Density.\n\nWhile Density is a remote-friendly company, we have offices in San Francisco, New York City, and Syracuse, NY.\n\nThis position reports to Density's Head of Product.\n\nHere's what we're looking for\n\n- Strong writing skills; ability to craft clear and concise documentation\n- Strong background in Linux/Unix Administration\n- Experience with automation and configuration management using Ansible, Chef, Puppet or an equivalent\n- Experience with deployment orchestration using Nomad, Consul, and Docker\n- Knowledge of the AWS stack\n- Ability to design and manage CI / CD pipelines (CircleCi)\n- Strong grasp of modern Python development\n- Experience with management of networking and VPNs\n- Experience managing software change control and software review systems such as Gerrit\n- Experience managing software releases across multiple git repositories\n- Experience with relational, non-relational, and timeseries data stores\n\nIcing on the cake\n\n- An academic background in Computer Science (BSc or MSc) or equivalent\n- Experience building APIs and web applications (Django, Flask, Rails, etc)\n- Familiar with software build systems such as CMake, Autotools, and Make\n- Familiar with repo aggregators such as Android's git-repo
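A minimal, hypothetical polling client for a people-count endpoint of the kind described above; the URL, token and response shape are invented and are not Density's actual API:

```python
# Invented example: poll a people-count endpoint every 10 seconds and print occupancy.
import time
import requests

API_URL = "https://api.example-counts.test/v1/spaces/{space_id}/count"  # invented endpoint
TOKEN = "replace-me"                                                    # invented credential

def current_count(space_id: str) -> int:
    resp = requests.get(
        API_URL.format(space_id=space_id),
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=5,
    )
    resp.raise_for_status()
    return resp.json()["count"]

if __name__ == "__main__":
    while True:
        print("occupancy:", current_count("lobby"))
        time.sleep(10)
```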


See more jobs at Density Inc.

Visit Density Inc.'s website

# How do you apply?\n\n This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.

Canonical


This position is a Remote OK original posting closed

Senior Software Engineer


Canonical

Originally posted on Remote OK

senior

engineer

linux

This job post is closed and the position is probably filled. Please do not apply.
Region for Hire: EMEA or Americas\n\nCanonical’s Snapcraft (https://snapcraft.io) makes it possible to deliver app updates to all of Linux automatically, eliminating the long tail of supported releases and complex install instructions.\nWith thousands of applications on the platform from over a thousand developers, including well-recognised names like Spotify, Slack, and Microsoft, the Snapcraft team’s mission is to uphold a high bar of quality as well as predictable, intuitive behaviour.\n\nWe are looking for a senior software engineer with background in developer tools to join our globally-distributed, home-based team.\n\nThis job involves international travel several times a year, usually for one week at a time.\n\n\n**Key responsibilities**\n* Our core mission is to make developers’ lives easier. You will have a keen sense of how Snapcraft can further reduce friction.\n* Snapcraft should be a joy to use. You have an eye for good user experience. You enjoy guiding the user through a journey or getting them back on rails with tasteful instruction.\n* Building snaps should feel familiar, building on the tools developers already know. You’ll be conversant in many languages, frameworks, integrations, and CI systems. You’ll teach these to produce snaps.\n* We’re a data-driven team. You’ll apply test-driven development, Sentry, and analytics to focus and refine your efforts.\n\n\n**Required skills and experience**\n* Expertise in Python or similar\n* Command line developer-oriented product experience\n* Experience with language packaging systems, such as PIP and NPM\n* Experience integrating with commercial CI systems, such as Travis and Circle CI\n* Experience working with containers, such as Docker and LXD\n* Hold yourself and others to a high standard when working with production deployments\n* Excellent communications skills in the English language, both verbal and written, especially in online environments such as Slack and Google Hangouts\n* Collaborate proactively within a distributed team\n* Demonstrable public speaking skills\n\n\n**Desirable skills and experience**\n* Portfolio of regular Open Source contributions and other public demonstrations of leadership\n* Experience working on a distributed team 


See more jobs at Canonical

Visit Canonical's website

# How do you apply?\n\n This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.

Croscon


This position is a Remote OK original posting verified closed

Python Web Developer


Croscon

Originally posted on Remote OK

dev

celery

This job post is closed and the position is probably filled. Please do not apply.
### NYC (or Remote)\nCroscon plans, builds, and grows digital products and services that help companies become leaders in their industry. Our clients range from startups looking to get to market with an MVP to large companies like Google, Soundcloud, NASA, and SEIU, among many others.\nWe present a unique opportunity for a seasoned engineer to work on a variety of projects in a variety of industries. Our team is small and driven, and we pull off projects that others simply can’t. You’ll work directly with principal engineers and product leads to plan and execute digital products for our clients. Additionally, Croscon is incubating its own companies, for which we retain full ownership and control.\nSuccess in this role will lead to engineering and/or product managerial roles.\nTo learn more, please visit: http://www.croscon.com/\n\n\n**Requirements:**\n- 3-4+ years of professional enterprise web development experience\n- Experience with modern backend web frameworks and libraries: Flask & Django\n- Expert in various database and modeling paradigms (MongoDB/NoSQL, MySQL, Postgres).\n- Experience with modern client web programming languages and standards: JavaScript, CSS3, HTML5, etc.\n- Deep understanding of web architecture including the HTTP protocol, caching proxies, REST services, etc.\n- Strong understanding of designing secure systems.\n- Experience with distributed architectures and measuring system performance metrics.\n- Expert in common software engineering practices, such as version control with Git, unit tests, continuous integration, and automated deployment.\n- Strong experience with public cloud systems (Amazon Web Services, Google Cloud Platform).\n- ORM experience -- SQLAlchemy / peewee\n- Queuing: Celery, RabbitMQ\n- Redis\n- Docker\n\n\n**Nice to have:**\n- Elasticsearch/Solr\n- Kubernetes\n- Serious DevOps or SysAdmin experience\n- Supervisor experience (or any other long-running process manager experience, e.g., pm2)\n- Salt / Ansible experience\n- React / ES6 experience\n- Jenkins
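A minimal, hypothetical Celery task of the kind listed under Queuing above, using Redis as broker and result backend; the task, module name and URLs are invented:

```python
# Illustrative only: one Celery task wired to a local Redis broker/backend.
from celery import Celery

app = Celery("jobs", broker="redis://localhost:6379/0", backend="redis://localhost:6379/1")

@app.task
def send_report(client_id: int) -> str:
    # Stand-in for real work (rendering and emailing a report, say).
    return f"report queued for client {client_id}"

# Usage, with a worker running (`celery -A jobs worker`):
#   result = send_report.delay(42)
#   print(result.get(timeout=10))
```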


See more jobs at Croscon

Visit Croscon's website

# How do you apply?\n\n This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.

Doximity


This position is a Remote OK original posting verified closed

Data Engineer Infrastructure


Doximity

Originally posted on Remote OK

elasticsearch

git

engineer

This job post is closed and the position is probably filled. Please do not apply.
Why work at Doximity?\n\nDoximity is the leading social network for healthcare professionals with over 70% of U.S. doctors as members. We have strong revenues, real market traction, and we're putting a dent in the inefficiencies of our $2.5 trillion U.S. healthcare system. After the iPhone, Doximity is the fastest adopted product by doctors of all time. Our founder, Jeff Tangney, is the founder & former President and COO of Epocrates (IPO in 2010), and Nate Gross is the founder of digital health accelerator RockHealth. Our investors include top venture capital firms who've invested in Box, Salesforce, Skype, SpaceX, Tesla Motors, Twitter, Tumblr, Mulesoft, and Yammer. Our beautiful offices are located in SoMa San Francisco.\n\nYou will join a small team of data infrastructure engineers (4) to build and maintain all aspects of our data pipelines, ETL processes, data warehousing, ingestion and overall data infrastructure. We have one of the richest healthcare datasets in the world, and we're not afraid to invest in all things data to enhance our ability to extract insight.\n\nJob Summary\n\n-Help establish robust solutions for consolidating data from a variety of data sources.\n-Establish data architecture processes and practices that can be scheduled, automated, replicated and serve as standards for other teams to leverage. \n-Collaborate extensively with the DevOps team to establish best practices around server provisioning, deployment, maintenance, and instrumentation.\n-Build and maintain efficient data integration, matching, and ingestion pipelines.\n-Build instrumentation, alerting and error-recovery system for the entire data infrastructure.\n-Spearhead, plan and carry out the implementation of solutions while self-managing.\n-Collaborate with product managers and data scientists to architect pipelines to support delivery of recommendations and insights from machine learning models.\n\nRequired Experience & Skills\n\n-Fluency in Python, SQL mastery.\n-Ability to write efficient, resilient, and evolvable ETL pipelines. \n-Experience with data modeling, entity-relationship modeling, normalization, and dimensional modeling.\n-Experience building data pipelines with Spark and Kafka.\n-Comprehensive experience with Unix, Git, and AWS tooling.\n-Astute ability to self-manage, prioritize, and deliver functional solutions.\n\nPreferred Experience & Skills\n\n-Experience with MySQL replication, binary logs, and log shipping.\n-Experience with additional technologies such as Hive, EMR, Presto or similar technologies.\n-Experience with MPP databases such as Redshift and working with both normalized and denormalized data models.\n-Knowledge of data design principles and experience using ETL frameworks such as Sqoop or equivalent. \n-Experience designing, implementing and scheduling data pipelines on workflow tools like Airflow, or equivalent.\n-Experience working with Docker, PyCharm, Neo4j, Elasticsearch, or equivalent. \n\nOur Data Stack\n\n-Python, Kafka, Spark, MySQL, Redshift, Presto, Airflow, Neo4j, Elasticsearch\n\nFun Facts About the Team\n\n-We have one of the richest healthcare datasets in the world.\n-Business decisions at Doximity are driven by our data, analyses, and insights.\n-Hundreds of thousands of healthcare professionals will utilize the products you build.\n-Our R&D team makes up about half the company, and the product is led by the R&D team. \n-Our Data Science team is comprised of about 20 people.
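A minimal, hypothetical skeleton of a daily ETL DAG using Airflow (named in the stack above), written against the Airflow 2.x Python API; the DAG id and task contents are invented:

```python
# Invented example: a two-step extract -> load DAG scheduled daily.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull new rows from the source database")      # placeholder for the real extract step

def load():
    print("copy transformed rows into the warehouse")    # placeholder for the real load step

with DAG(
    dag_id="daily_member_etl",          # invented DAG name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task           # load runs only after extract succeeds
```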


See more jobs at Doximity

Visit Doximity's website

# How do you apply?\n\n This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.

Doximity


This position is a Remote OK original posting verified closed

Machine Learning Engineer


Doximity

Originally posted on Remote OK

git

machine learning

data science

This job post is closed and the position is probably filled. Please do not apply.
Why work at Doximity?\n\nDoximity is the leading social network for healthcare professionals with over 70% of U.S. doctors as members. We have strong revenues, real market traction, and we're putting a dent in the inefficiencies of our $2.5 trillion U.S. healthcare system. After the iPhone, Doximity is the fastest adopted product by doctors of all time. Our founder, Jeff Tangney, is the founder & former President and COO of Epocrates (IPO in 2010), and Nate Gross is the founder of digital health accelerator RockHealth. Our investors include top venture capital firms who've invested in Box, Salesforce, Skype, SpaceX, Tesla Motors, Twitter, Tumblr, Mulesoft, and Yammer. Our beautiful offices are located in SoMa San Francisco.\n\nSkills & Requirements\n\n-3+ years of industry experience; M.S. in Computer Science or other relevant technical field preferred.\n-3+ years experience collaborating with data science and data engineering teams to build and productionize machine learning pipelines.\n-Fluent in SQL and Python; experience using Spark (pyspark) and working with both relational and non-relational databases.\n-Demonstrated industry success in building and deploying machine learning pipelines, as well as feature engineering from semi-structured data.\n-Solid understanding of the foundational concepts of machine learning and artificial intelligence.\n-A desire to grow as an engineer through collaboration with a diverse team, code reviews, and learning new languages/technologies.\n-2+ years of experience using version control, especially Git.\n-Familiarity with Linux, AWS, Redshift.\n-Deep learning experience preferred.\n-Work experience with REST APIs, deploying microservices, and Docker is a plus.\n\nWhat you can expect\n\n-Employ appropriate methods to develop performant machine learning models at scale, owning them from inception to business impact.\n-Plan, engineer, and deploy both batch-processed and real-time data science solutions to increase user engagement with Doximity’s products.\n-Collaborate cross-functionally with data engineers and software engineers to architect and implement infrastructure in support of Doximity’s data science platform.\n-Improve the accuracy, runtime, scalability and reliability of machine intelligence systems\n-Think creatively and outside of the box. 
The ability to formulate, implement, and test your ideas quickly is crucial.\n\nTechnical Stack\n\n-We historically favor Python and MySQL (SQLAlchemy), but leverage other tools when appropriate for the job at hand.\n-Machine learning (linear/logistic regression, ensemble models, boosted models, deep learning models, clustering, NLP, text categorization, user modeling, collaborative filtering, topic modeling, etc) via industry-standard packages (sklearn, Keras, NLTK, Spark ML/MLlib, GraphX/GraphFrames, NetworkX, gensim).\n-A dedicated cluster is maintained to run Apache Spark for computationally intensive tasks.\n-Storage solutions: Percona, Redshift, S3, HDFS, Hive, Neo4j, and Elasticsearch.\n-Computational resources: EC2, Spark.\n-Workflow management: Airflow.\n\nFun facts about the Data Science team\n\n-We have one of the richest healthcare datasets in the world.\n-We build code that addresses user needs, solves business problems, and streamlines internal processes.\n-The members of our team bring a diverse set of technical and cultural backgrounds.\n-Business decisions at Doximity are driven by our data, analyses, and insights.\n-Hundreds of thousands of healthcare professionals will utilize the products you build.\n-A couple times a year we run a co-op where you can pick a few people you'd like to work with and drive a specific company goal.\n-We like to have fun - company outings, team lunches, and happy hours!
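A minimal, hypothetical text-classification pipeline using the scikit-learn stack named above; the toy documents and labels are invented:

```python
# Invented example: TF-IDF features feeding a logistic regression classifier.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["cardiology conference recap", "new oncology trial results",
         "cardiac imaging update", "chemotherapy dosing guidance"]
labels = ["cardiology", "oncology", "cardiology", "oncology"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(texts, labels)
print(model.predict(["cardiac imaging conference"]))  # likely: cardiology
```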


See more jobs at Doximity

Visit Doximity's website

# How do you apply?\n\n This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.
This job post is closed and the position is probably filled. Please do not apply.
Stacktical is a Predictive Scalability Testing platform.\nIt ensures our customers design and ship software that always scales to the maximum of its ability and with minimum footprint.\nThe Stacktical Site Reliability Engineer is responsible for helping our customers engineer CI/CD pipelines around system testing practices that involve Stacktical.\nLike the rest of the team, they also actively participate in building the Stacktical platform itself.\nWe are looking for a skilled DevOps and Site Reliability Engineer, expert in Scalability, who is excited about the vision of using Predictive Analytics and AI to reinvent the field.\nWith a long-standing passion for automating your work and the work of others, you also understand how Software as a Service is increasingly empowering companies to do just that.\nYou can demonstrate previous experience in startups and you’re capable of working remotely, with great efficiency, in fast-paced, demanding environments. Ideally, you’d have a proven track record of working remotely for 2+ years.\nNeedless to say, you fully embrace the working philosophy of digital nomadism we’re developing at Stacktical and both the benefits and responsibilities that come with it.\nYour role and responsibilities include the following:\n- Architecture, implementation and maintenance of server clusters, APIs and microservices, including critical production environments, in Cloud and other hosting configurations (dedicated, vps and shared).\n- Ensure the availability, performance and scalability of applications in line with proven design and architecture best practices.\n- Design and execute Scalability strategies that ensure the scalability and the elasticity of the infrastructure.\n- Manage a portfolio of software products, their Development Life Cycle, and optimize their Continuous Integration and Delivery workflows (CI/CD).\n- Automate the Quality & Reliability Testing of applications (Unit Tests, Integration Tests, E2E Tests, Performance and Scalability Tests).\n## Skills we are looking for\n- A 50-50 mix between Software Development and System Administration experience\n- Proficiency in Node.js, Python, R, Erlang (Elixir) and / or Go\n- Hands-on experience in NoSQL / SQL database optimization (slow query indexing, sharding, clustering)\n- Hands-on experience in administering high availability and high performance environments, as well as managing large-scale deployments of traffic-heavy applications.\n- Extensive knowledge of Cloud Computing concepts, technologies and providers (Amazon AWS, Google Cloud Platform, Microsoft Azure…).\n- A strong ability to design and execute cutting-edge System Testing strategies (smoke tests, performance/load tests, regression tests, capacity tests).\n- Excellent understanding of Scalability processes and techniques.\n- Good grasp of Scalability and Elasticity concepts and creative Auto Scaling strategies (Auto Scaling Groups management, API-based scheduling).\n- Hands-on experience with Docker and Docker orchestration tools like Kubernetes and their corresponding provider management services (Amazon ECS, Google Container Engine, Azure Container Service...).\n- Hands-on experience with leading Infrastructure as Code SCM tools like Terraform and Ansible\n- Proven ability to work remotely with teams of various sizes in same/different timezones, from anywhere, and still remain highly motivated, productive, and organized.\n- Excellent English communication skills, including verbal, written, and presentation. 
Great email and Instant Messaging (Slack) proficiency.\nWe’re looking for a self-learner always willing to step out of her/his comfort zone to become better. An upright individual, ready to write the first and many chapters of the Stacktical story with us.\n## Life at our virtual office\nOur headquarters are in Paris but our offices and our clients are everywhere in the world.\nWe’re a fully distributed company with a 100% remote workforce. So pretty much everything happens on Slack and various other collaborative tools.\n## Remote work at Stacktical\nRemote work at Stacktical requires you to forge a contract with the Stacktical company, using your own billing structure.\nThat means you would either need to own a company or leverage a compatible legal status.\nLabour laws can be largely different from one country to another and we are not (yet) in a position to comply with the local requirements of all our employees.\nJust because you will be a contractor doesn’t make you less of a fully-fledged employee of Stacktical. In fact, even our founders are contractors too.\n## Compensation Package\n#### Fixed-price contract\nYour contract’s fixed price is engineered around your expectations, our possibilities and the overall implications of remote work.\nLet’s have a transparent chat about it.\n#### Stock Options\nYes, joining Stacktical means you are entrusted to own part of the company.
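A minimal, hypothetical smoke/load probe of the kind a CI pipeline might run as part of the scalability testing described above; the target URL and request counts are invented and this is not Stacktical's product:

```python
# Invented example: hit an endpoint N times concurrently and report latency percentiles.
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

import requests

URL = "https://example.test/health"   # invented target endpoint

def probe(_):
    start = time.perf_counter()
    requests.get(URL, timeout=10).raise_for_status()
    return time.perf_counter() - start

def run(total_requests=50, concurrency=10):
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        latencies = list(pool.map(probe, range(total_requests)))
    print(f"p50={statistics.median(latencies):.3f}s max={max(latencies):.3f}s")

if __name__ == "__main__":
    run()
```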


See more jobs at Stacktical

Visit Stacktical's website

# How do you apply?\n\n This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.
This job post is closed and the position is probably filled. Please do not apply.
At Vime Labs we’re looking for a talented & passionate **Backend Developer** worldwide to become part of our team. If you thrive on challenges and agile development and are into innovative & revolutionary solutions, we are looking for you.\n\nNomads are welcome. Wherever you are, we would like to talk to you.\n\n__It is essential:__\n\n- Enthusiasm; be positive and dynamic.\n- Eager to learn and innovate.\n- Analytical and problem-solving ability. But above all, creativity and imagination.\n- Ability to work remotely. Not hours but results.\n\n__Who we’re looking for:__\n\n- You have detailed knowledge of developing API backends using Python\n- You have a good understanding of NoSQL databases such as MongoDB or Cassandra\n- You have a good understanding of IaaS/PaaS such as AWS, Google or Azure\n- You have a strong understanding of API development patterns including REST\n- You have good English skills\n\n__Additional Qualifications Include:__\n\n- Strong understanding of developing API backends using different languages like Node/Java/Scala\n- Experience using Git or other version control systems.\n- Experience using task queue systems such as Celery.\n- Ability to design scalable systems\n- Experience using DevOps tools such as Docker.\n- Experience in designing persistence and caching models using NoSQL\n- Experience with Agile / Scrum development.\n- Experience in Mobile QA / Testing.\n- Experience with XMPP servers\n\n__What we offer?__\n\n- International and transparent startup environment\n- Be as big as you want to be (as others say, “real possibilities for personal and professional growth”)\n- Apple MacBook Pro equipment\n- Ability to work remotely and on flexible schedules\n- Constant exposure to the latest technologies and gadgets. Continuous learning and experience.\n- Competitive salary\n- All of our support for your personal and professional development (courses, events, conferences, hackathons, languages, …)\n- Full-time, permanent\n- Club Mate imported from Berlin\n\nExtra tags: backend, python, mongodb, nosql, docker, dev, cloud


See more jobs at Vime Labs

Visit Vime Labs's website

# How do you apply?\n\n This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.

FineTune Learning


closed

Senior Full Stack Software Developer Python Docker ReactJS


FineTune Learning


react

full stack
This job post is closed and the position is probably filled. Please do not apply.
\nFineTune is seeking a full stack developer with strong hands-on experience in Python, pytest, Docker, database performance and microservices. S/he must have experience working with at least 3 production-released software projects/products that they have deployed on AWS or Google Cloud. S/he should be hands-on technically and comfortable solving complex system problems, while making sure the software team understands clearly what they are building and while refactoring/architecting new and existing services to scale to support 5 million users. A big plus if the developer also has strong ReactJS/Redux, Axios and GraphQL experience.\n\n\nS/he should also be comfortable interacting with customers and providing guidance on the technical feasibility and scope of engineering/rearchitecting needed to solve problems and deliver features. The full stack developer will also work with the QA team to find the best ways to increase the performance of the development team and enhance software quality and development speed.\n\nS/he will interface with the CTO to continuously drive innovation and new product development while promoting and advancing the scalability and modularization of the current platform we are working on with Collegeboard. S/he will be an essential member of the leadership team, helping to drive the company vision and mission while scaling the software for a larger audience.
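A minimal, hypothetical pytest example in the spirit of the stack described above: a tiny scoring helper plus the tests pytest would collect; the function and its rubric scale are invented:

```python
# Invented example (file: test_scoring.py): a small helper and its pytest tests.
import pytest

def scale_score(raw: float, out_of: float = 5.0) -> float:
    """Scale a raw 0..1 score onto a 0..out_of rubric scale."""
    if not 0.0 <= raw <= 1.0:
        raise ValueError("raw score must be between 0 and 1")
    return round(raw * out_of, 2)

def test_scale_score_midpoint():
    assert scale_score(0.5) == 2.5

def test_scale_score_rejects_out_of_range():
    with pytest.raises(ValueError):
        scale_score(1.5)
```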


See more jobs at FineTune Learning

# How do you apply?\n\n This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.

VOSTROM


closed

DevOps Engineer


VOSTROM


devops

javascript

infosec

elasticsearch

This job post is closed and the position is probably filled. Please do not apply.
\nDevOps Engineer - Emphasis on Linux / Docker / Node.js / Elasticsearch / MongoDB\nThe Opportunity:\nWe're looking for an experienced DevOps engineer based in Phoenix, AZ, Virginia Beach, VA or the Washington, DC metro area; however, remote (tele)workers will also be considered for the position if you have excellent communication skills and are willing to travel to one of the above locations several times per year.\nThe Day to Day:\n* Provide operational support and automation tools to application developers \n* Bridge the gap between development and operations to ensure successful delivery of projects \n* Participate as a member of the application development team \n* Build back-end frameworks that are maintainable, flexible and scalable\n* Operate and scale the application back-end including the database clusters \n* Anticipate tomorrow's problems by understanding what users are trying to accomplish today \n\n\nRequirements:\n* DevOps experience with Linux or FreeBSD \n* Experience with Linux Containers and Docker \n* Configuration management experience, Salt Stack preferred \n* Exposure to the deployment and operations of node.js applications \n* Experience operating and optimizing Elasticsearch at large scale\n* Operational experience with Hadoop, MongoDB, Redis, Cassandra, or other distributed big data systems \n* Experience with any of JavaScript, Python, Ruby, Perl and/or shell scripting \n* Comfort with compute clusters and many terabytes of data \n* US Citizenship / Work Authorization\n\n\nBonus Points:\n* Development experience with Node.js or other HTTP backend tools\n* Mac OS X familiarity \n* BS or MS in a technology or scientific field of study\n* High energy level and pleasant, positive attitude!\n* Evidence of working well within a diverse team\n\n\nCompensation:\n* Salary commensurate with experience, generally higher than competitive industries\n* Comprehensive benefits package\n* Opportunities for advancement and a clear career path\n\n\nAbout Us:\nWe conduct advanced technical research and develop innovative software and systems that help meet network security and reliability challenges for organizations world-wide.  You can read more at our web site.  \nCareer Opportunities:\nWe have many other openings available. For a complete listing, visit jobs.vostrom.com


See more jobs at VOSTROM

# How do you apply?\n\n This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.