Patients Know Best is hiring a Remote Cloud Engineer

**We are hiring across Europe, not just the EU.**

## About us

Patients Know Best is developing a personal health records service that is changing the way people manage their health, making their lives easier and opening up a global market in the process. We've built a platform to help patients and clinicians share medical data online that is being used in the UK and abroad.

We are a fully distributed team; everyone works remotely. The core values of our team:

* we are here to make people's lives better: everyone at PKB is here to make a positive impact in the world
* full transparency: all information is made available to everyone (unless there's a really good reason not to)
* flexibility, trust and outcome-focus: we care about the results of your work, not how and when you achieve it
* support: everyone is encouraged to ask questions and help each other
* continuous improvement on all levels: we iterate on software, but we also iterate on infrastructure, organisation and culture, so that we're a better company every year that builds a better product every year

## Our stack

PKB services are hosted on Google Cloud Platform.

* Infrastructure is managed with Terraform.
* Our JVM-based back-end services are deployed to Kubernetes clusters.
* Data is stored in Postgres (both GCP-managed and self-hosted), Cloud Storage and BigQuery.
* Our CI server of choice is TeamCity.
* Monitoring is done with Prometheus and Grafana.
* Business intelligence & reporting is based on BigQuery, Google Data Studio and Google Pub/Sub (a short illustrative sketch follows this listing).

## About the role

You will be the first fully infrastructure-focused person in the team, working on the cloud infrastructure, deployment pipeline and development environment. As the company grows, you will be able to build your own team if you need more people.

The role requires broad skills: you should be able to

* analyse the current state of the platform
* prioritise work to achieve the best return on the development resources we invest
* participate in and lead implementation efforts
* identify the proper metrics and measures to track the stability and performance of the system
* collaborate with other development teams to help them achieve their goals, improve their efficiency and educate them

A few specific examples:

* Better CI/CD: we have tons of tests -- it would take almost a day to run all of them on a common laptop. This is great for QA, less great for the development experience.
* Zero trust/BeyondCorp-like company infrastructure: designing a scalable, robust path forward to secure and deploy our internal tooling
* Improving the monitoring of JVMs, clusters and applications
* Migrating a self-hosted multi-terabyte Postgres cluster to a Google-managed instance

## Requirements

You don't necessarily have to be proficient in all of these, but the more you know the better.

* outstanding written communication skills and good verbal communication skills
* experience with remote work
* hands-on experience with CI/CD, e.g. testing and deploying dependent microservices
* knowledge of Kubernetes and Terraform, e.g. managing secrets, resources, networks
* Google Cloud Platform
* JVM ecosystem, e.g. profiling, monitoring, tuning
* networking, e.g. how to peer VPCs
* databases, e.g. various replication methods, monitoring, PITR recovery
* coding/automation experience: our primary need is automation, not administration

We are looking for candidates living in Europe, as that makes virtual and (occasionally) in-person meetings easier to organise.

## Benefits

Supportive and smart colleagues, flexible work, the opportunity to make a difference, 25 days holiday. Competitive salary.

#Salary and compensation
$60,000 — $120,000/year

#Location
🇪🇺 EU-only
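
The stack above mentions BigQuery-backed business intelligence and reporting. As a hedged illustration only (the project, dataset, table and column names below are hypothetical placeholders, not PKB's), a minimal reporting query with the google-cloud-bigquery client library could look like this:

```python
# Minimal sketch: run a small reporting query against BigQuery.
# Project, dataset, table and column names are placeholders for illustration.
from google.cloud import bigquery

client = bigquery.Client(project="example-project")  # uses default GCP credentials

query = """
    SELECT DATE(created_at) AS day, COUNT(*) AS records_shared
    FROM `example-project.reporting.record_shares`
    WHERE created_at >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 30 DAY)
    GROUP BY day
    ORDER BY day
"""

for row in client.query(query).result():  # result() waits for the query job to finish
    print(f"{row.day}: {row.records_shared}")
```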


See more jobs at Patients Know Best

# How do you apply?

https://apply.workable.com/patients/j/7C11C8D9FB/apply/
Apply for this job

Argyle (verified)
🌏 Worldwide · 💰 $40k - $80k
python · celery · pydantic · playwright

Argyle is hiring a Remote Software Engineer

**Software Engineer (Crawling/Reverse Engineering)**
**Remote - Europe/South America/Singapore/Taiwan/Thailand/Philippines**

**$40k – $80k**

Argyle is a remote-first, Series A, fast-growing tech startup that has reimagined how we can use employment data.

Renting an apartment, buying a car, refinancing a home, applying for a loan. In every case, the first question you will be asked is, "how do you earn your money?" Wouldn't you think that information foundational to our society would be simple to manage, transfer and control? Well, it's not!

Argyle provides businesses with a single global access point to employment data. Any company can process work verifications, gain real-time transparency into earnings and view worker profile details.

We are a fun and passionate group of people, all working remotely across 19 different countries and counting. We are now looking for multiple Scanner Engineers (Crawling/Reverse Engineering) to come and join our global team.

You will join a team of exceptionally talented engineers constantly looking for improvements and innovative ways to meet our business needs. Scrapers (we call them Scanners) are at the core of our business. It means you will constantly be fighting and innovating, taking ownership and making bold decisions.

**What will you do?**

- You will create, own and maintain Scanners
- You will contribute to general improvements such as shared libraries & frameworks
- You will work and communicate closely across different teams

**Our stack**

Python is our main language. Python libraries we use: celery, pydantic, playwright, puppeteer, BeautifulSoup, asyncio, httpx, pydash, mypy, pytest, poetry, pyenv, poppler, PdfMiner. We run Docker, Kubernetes, GCP, GitHub, ArgoCD. (A short illustrative sketch follows this listing.)

**Requirements**

- Experience in developing robust web scrapers
- Reverse engineering knowledge of Android/iOS or JS/web apps
- Knowledge of bot and captcha bypass mitigation tactics
- Python coding experience preferred
- Big bonus points if you are familiar with Android/iOS device verification frameworks (SafetyNet Attestation/DeviceCheck) and ways to bypass them
- Able to think and act fast in a startup environment
- Not scared by a bit of chaos and rapid change
- Takes ownership of their workload and commitments
- We are big fans of not pointing at things but getting them fixed

**Why Argyle?**

- Remote-first company
- International environment
- Flexible working hours
- Stock options
- Flexible vacation leave
- $1000 after a month of employment to set up your home office
- MacBook

Argyle embraces diversity and equal opportunity. We are committed to building a team that represents a variety of backgrounds, perspectives, and skills. We believe the more inclusive we are, the better our company will be.

#Salary and compensation
$40,000 — $80,000/year

#Location
🌏 Worldwide
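
The stack section above names Playwright and pydantic among Argyle's Python libraries. As a hedged illustration only (the URL, CSS selectors and model fields are hypothetical, not Argyle's), a tiny scraper that fetches a page with Playwright and validates the result with pydantic might look like this:

```python
# Minimal sketch: fetch a page with Playwright and validate the scraped data with pydantic.
# The URL, selectors and model fields are placeholders invented for illustration.
from playwright.sync_api import sync_playwright
from pydantic import BaseModel


class PayStub(BaseModel):
    employer: str
    gross_pay: float


def scrape_paystub(url: str) -> PayStub:
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()
        page.goto(url)
        employer = page.text_content("#employer-name") or ""
        gross = page.text_content("#gross-pay") or "0"
        browser.close()
    # pydantic coerces and validates the raw strings into typed fields
    return PayStub(employer=employer.strip(), gross_pay=float(gross.replace("$", "")))


if __name__ == "__main__":
    print(scrape_paystub("https://example.com/paystub"))
```

In a setup like the one described, a function of this shape would typically be dispatched as a Celery task against real target sites; the sketch only shows the fetch-and-validate core.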


See more jobs at Argyle

# How do you apply?

https://argyle.rippling-ats.com/job/219736/software-engineer-crawling-reverse-engineering
Apply for this job

Previous Remote 🔷 GCP Jobs

Vizibl (verified, closed)
🇪🇺 EU-only · 💰 $60k - $80k
python · flask · postgresql · fast api

This job post is closed and the position is probably filled. Please do not apply.
**At Vizibl, we're on a mission to help every company work better, together. We want to help all companies make a difference in the world by revolutionising the way they work together, empowering them to reach their full potential.**

We're off to a great start too. Teams in some of the world's largest enterprise companies are already collaborating with their suppliers through Vizibl and transforming the way they work to drive innovation together.

We welcome people from all backgrounds who seek the opportunity to help build a future where every company sees the benefit of working openly and collaboratively. If you have the passion, curiosity and collaborative spirit, work with us, and let's help every company work better, together.

Vizibl is a growing SaaS platform used by the world's largest organisations to help change the way they work. Our unique blend of enterprise know-how coupled with our beautiful and usable products is one of the things our customers love about us.

Vizibl is looking for a talented Back End Engineer who is passionate about building scalable, maintainable, performant backend services that put security first without compromising on our commitment to openness. As Vizibl grows, so do our ambitions for the future of our backend services, which is why this is a great opportunity for the right person to join a talented team to drive exciting new projects that will help change the way the world's largest companies work with each other.

This person will work across our backend services to help maintain our REST API, develop solutions to new problems, be involved in the design and architecture of the platform and work collaboratively to support the growth of the platform (a minimal endpoint sketch follows this listing). The ideal candidate is a self-motivated person who cares deeply about building excellent products. They don't settle for OK and have a desire to integrate themselves deeply into the workings of the business.

As this is a fully remote position, we'll be looking for strong communication skills and the ability to motivate yourself and your team to work independently.

If you're interested in building products that challenge the status quo in the enterprise space, and you enjoy an abundance of autonomy with just the right amount of alignment, then we'd love to hear from you.

**Open to Everyone**

Vizibl is proud to be an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees.

**Working for Vizibl you will...**

* Have a huge amount of autonomy
* Work remotely
* Work with cutting-edge technologies
* Manage and support applications in production on our Kubernetes cluster
* Contribute to the design, architecture and development processes for a system used by the world's largest enterprise organisations
* Be involved in the planning and development of solutions
* Be an ambassador for our product values
* Work with an amazing team of people spread out across Europe
* Contribute to a positive and empowering company culture
* Help to build and improve a platform used by some of the world's biggest organisations
* Get support to grow and develop in your career

**What You'll Need**

* Experience working in a professional engineering team
* 3+ years of Python experience (strong candidates with experience in another language may be considered)
* Experience building production-ready REST APIs
* Strong skills in information security architecture and security best practices
* Understanding of data modelling and querying in (Postgres) SQL
* Experience with Git
* English fluency and excellent communication skills
* Experience with TDD/BDD methodologies
* A desire to learn and improve
* Organisation and self-motivation
* Being a great team player. Our product squads work very closely together to build solutions

**We'll be impressed if**

* You have DevOps experience with Docker, Kubernetes, Google Cloud, etc.
* You have experience working in an agile team
* You have experience working in a remote team
* You have experience with queuing systems like Celery or Kafka
* You have worked on products that have been subject to regular security audits
* You write about back-end technologies
* You have frontend JavaScript experience
* You have experience architecting complex systems
* You have experience scaling web applications
* You're familiar with the enterprise project management space
* You've integrated with large corporate IT environments before

**Benefits**

* Huge amounts of autonomy
* Flexible working
* Work from anywhere
* Competitive compensation packages
* Options in a growing SaaS business
* Work with a great team
* Great career development opportunities
* Annual retreats

#Salary and compensation
$60,000 — $80,000/year

#Location
🇪🇺 EU-only
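
The listing above is tagged with Python and Flask and describes maintaining a REST API backed by Postgres. Purely as a hedged sketch (the routes, payload shape and in-memory store are invented for illustration and are not Vizibl's API), a minimal Flask endpoint of that general kind could look like this:

```python
# Minimal sketch of a small JSON REST API in Flask.
# Routes, fields and the in-memory "database" are illustrative placeholders;
# a real service would sit on top of Postgres.
from flask import Flask, jsonify, request

app = Flask(__name__)

SUPPLIERS = {1: {"id": 1, "name": "Acme Ltd"}}  # stand-in for a Postgres-backed store


@app.route("/api/v1/suppliers/<int:supplier_id>", methods=["GET"])
def get_supplier(supplier_id: int):
    supplier = SUPPLIERS.get(supplier_id)
    if supplier is None:
        return jsonify({"error": "not found"}), 404
    return jsonify(supplier)


@app.route("/api/v1/suppliers", methods=["POST"])
def create_supplier():
    payload = request.get_json(force=True)
    new_id = max(SUPPLIERS) + 1
    SUPPLIERS[new_id] = {"id": new_id, "name": payload["name"]}
    return jsonify(SUPPLIERS[new_id]), 201


if __name__ == "__main__":
    app.run(debug=True)
```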


See more jobs at Vizibl

# How do you apply?

This job post has been closed by the poster, which means they probably have enough applicants now. Please do not apply.

Causal (verified, closed)
🌏 Worldwide · 💰 $60k - $140k
typescript · react · node · postgres

This job post is closed and the position is probably filled. Please do not apply.
**We're building a new way to think and work with numbers.**

Causal is a tool for performing calculations, visualising data, and communicating with numbers ([check it out](https://causal.app)). We take the good parts of spreadsheets and combine them with the good parts of programming, to make number-crunching fast, collaborative, and accessible to everyone.

We're a small team, well funded by some great VCs ([Coatue](https://www.crunchbase.com/organization/coatue), [Passion Capital](https://www.passioncapital.com/)) and angel investors ([Naval Ravikant](https://twitter.com/naval), [Scott Belsky](https://twitter.com/scottbelsky), and many more) across the US and Europe.

---

We're looking for a full-stack engineer to accelerate our product development. As one of our first hires, you'll play a significant role in setting the direction of Causal's product, company, and culture.

Our product primarily consists of a web UI on the frontend and a computation engine on the backend. Causal needs a low floor and a high ceiling — it should be simple enough for anyone to get started with, but powerful enough for really complex use-cases.

Performance is paramount on both the frontend and backend.

Familiarity with our tech stack is required (frontend: TypeScript/React, backend: TypeScript/Node/Go).

Please check out this [link](https://www.notion.so/causal/Full-stack-Engineer-421f869ab09e4307a9011550e3bacced) for more info.

#Salary and compensation
$60,000 — $140,000/year

#Location
🌏 Worldwide


See more jobs at Causal

# How do you apply?

This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.

Jur (verified, closed)
🌏 Worldwide · 💰 $80k - $100k
management · crypto · ci cd

This job post is closed and the position is probably filled. Please do not apply.
### About Jur

Jur is a decentralized online dispute resolution platform ⚖️
Join us in revolutionizing the justice sector to make it more transparent, efficient and open.
Access to justice is hypothetical for too many people. Help us make it real for everyone 🚀

### About the Role

We are looking for an experienced chief technology officer to join our mission to improve access to justice.

**Please note**: this will be a hands-on role where you will be required to code (e.g. half of your working day). Most of your time will be spent coordinating the team, reviewing technical requests from management, and making sure that work is on track and follows best practices.

### Responsibilities

As CTO, your role will involve:

* requirements analysis
* system design and architecture
* technical task assignment and review
* managing and monitoring cloud infrastructure
* codebase review
* full-stack development as needed
* removing any blockers in tech-development-related activities
* coordinating with the CPO for product releases and delivery
* contributing to go-to-market technical strategy
* giving walkthroughs and demos to users or potential customers
* having a good work ethic and integrity to set an example for the team
* reporting to the CEO and COO

### Skills

* Bachelor's degree in Computer Science or Computer Engineering (mandatory)
* 6+ years of tech development experience (mandatory)
* 2+ years of blockchain industry experience (preferred). A deep understanding of the blockchain industry, tokenomics and decentralized product development would be a strong plus
* Previously worked on a dApp (preferred)
* Solid grasp of at least one backend language (Node.js/PHP/Python/Ruby)
* Solid grasp of a modern UI library (React.js/Vue.js)
* Knowledge of the Truffle Suite and dApp development lifecycle (a small generic sketch of dApp-adjacent backend work follows this listing)
* Vast experience in CI/CD pipelines and architecture at scale (GCP preferred)
* Experience in Docker and Kubernetes management
* Vast experience in unit, integration and regression testing
* Previous startup experience and a strong track record (mandatory)
* Experience in Agile development and OKRs (mandatory)
* Experience managing cross-functional and distributed teams
* Exceptional interpersonal and collaboration skills
* A positive can-do attitude and sense of urgency
* Quick learner who loves co-creating with others and building team cohesiveness
* Understanding of current and future market trends and how blockchain can be utilized
* Native English speaker (preferred)
* Availability to travel up to 30% of the time once COVID-19 restrictions are lifted and it is once again safe to meet customers in person and give in-house presentations, etc.

### Hiring Process

* We receive and review your application 🔍
* If you meet all the mandatory requirements, we will send you an assignment ✅
* If you perform well on the assignment, an interview 🎤 will be scheduled with our management, where we evaluate the cultural fit together (occasionally this can include some technical questions too 🤖)
* If you pass this last round... congratulations and welcome aboard! 🎊

### Location

For the entire next year, this position will be almost entirely remote 🌏
You might be required to travel from time to time, either to meet with the team or to meet customers, so availability to travel is preferred.
Possible travel locations: San Francisco 🇺🇸, Bangalore 🇮🇳, Dubai 🇦🇪, Europe 🇪🇺.

#Salary and compensation
$80,000 — $100,000/year

#Location
🌏 Worldwide
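
The skills list above mentions dApp development and names Python as one acceptable backend language. Purely as a hedged, generic illustration (web3.py is not named in the posting; the RPC endpoint and address below are placeholders), reading basic on-chain state from a backend service might look like this:

```python
# Minimal generic sketch of dApp-adjacent backend work: read on-chain state with web3.py.
# Uses web3.py v6-style method names; the RPC endpoint and address are placeholders.
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("https://example-node.invalid"))  # hypothetical JSON-RPC node

if w3.is_connected():
    latest = w3.eth.get_block("latest")
    print("chain id:", w3.eth.chain_id)
    print("latest block:", latest["number"])
    balance_wei = w3.eth.get_balance("0x0000000000000000000000000000000000000000")
    print("balance (ETH):", w3.from_wei(balance_wei, "ether"))
else:
    print("could not reach the node")
```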


See more jobs at Jur

# How do you apply?

This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.
Splitgraph

This job post is closed and the position is probably filled. Please do not apply.
# We're building the Data Platform of the Future

Join us if you want to rethink the way organizations interact with data. We are a **developer-first company**, committed to building around open protocols and delivering the best experience possible for data consumers and publishers.

Splitgraph is a **seed-stage, venture-funded startup hiring its initial team**. The two co-founders are looking to grow the team to five or six people. This is an opportunity to make a big impact on an agile team while working closely with the founders.

Splitgraph is a **remote-first organization**. The founders are based in the UK, and the company is incorporated in both the USA and the UK. Candidates are welcome to apply from any geography. We want to work with the most talented, thoughtful and productive engineers in the world.

# Open Positions

**Data Engineers welcome!** The job titles have "Software Engineer" in them, but at Splitgraph there's a lot of overlap between data and software engineering. We welcome candidates from all engineering backgrounds.

[Senior Software Engineer - Backend (mainly Python)](https://www.notion.so/splitgraph/Senior-Software-Engineer-Backend-2a2f9e278ba347069bf2566950857250)

[Senior Software Engineer - Frontend (mainly TypeScript)](https://www.notion.so/splitgraph/Senior-Software-Engineer-Frontend-6342cd76b0df483a9fd2ab6818070456)

→ [**Apply to Job**](https://4o99daw6ffu.typeform.com/to/ePkNQiDp) ← (same form for both positions)

# What is Splitgraph?

## **Open Source Toolkit**

[Our open-source product, sgr,](https://www.github.com/splitgraph/splitgraph) is a tool for building, versioning and querying reproducible datasets. It's inspired by Docker and Git, so it feels familiar. And it's powered by PostgreSQL, so it works seamlessly with existing tools in the Postgres ecosystem. Use Splitgraph to package your data into self-contained data images that you can share with other Splitgraph instances.

## **Splitgraph Cloud**

Splitgraph Cloud is a platform for data cataloging, integration and governance. The user can upload data, connect live databases, or "push" versioned snapshots to it. We give them a unified SQL interface to query that data, a catalog to discover and share it, and tools to build/push/pull it.

# Learn More About Us

- Listen to our interview on the [Software Engineering Daily podcast](https://softwareengineeringdaily.com/2020/11/06/splitgraph-data-catalog-and-proxy-with-miles-richardson/)

- Watch our co-founder Artjoms present [Splitgraph at the Bay Area ClickHouse meetup](https://www.youtube.com/watch?v=44CDs7hJTho)

- Read our HN/Reddit posts ([one](https://news.ycombinator.com/item?id=24233948) [two](https://news.ycombinator.com/item?id=23769420) [three](https://news.ycombinator.com/item?id=23627066) [four](https://old.reddit.com/r/datasets/comments/icty0r/we_made_40k_open_government_datasets_queryable/))

- [Read our blog](https://www.splitgraph.com/blog)

- Read the slides from our early (2018) presentations: ["Docker for Data"](https://www.slideshare.net/splitgraph/splitgraph-docker-for-data-119112722), [AHL Meetup](https://www.slideshare.net/splitgraph/splitgraph-ahl-talk)

- [Follow us on Twitter](https://twitter.com/splitgraph)

- [Find us on GitHub](https://www.github.com/splitgraph)

- [Chat with us in our community Discord](https://discord.gg/eFEFRKm)

- Explore the [public data catalog](https://www.splitgraph.com/explore) where we index 40k+ datasets

# How We Work: What's our stack look like?

We prioritize developer experience and productivity. We resent repetition and inefficiency, and we never hesitate to automate the things that cause us friction. Here's a sampling of the languages and tools we work with:

- **[Python](https://www.python.org/) for the backend.** Our [core open source](https://www.github.com/splitgraph/splitgraph) tech is written in Python (with [a bit of C](https://github.com/splitgraph/Multicorn) to make it more interesting), as well as most of our backend code. The Python code powers everything from authentication routines to database migrations. We use the latest version and tools like [pytest](https://docs.pytest.org/en/stable/), [mypy](https://github.com/python/mypy) and [Poetry](https://python-poetry.org/) to help us write quality software.

- **[TypeScript](https://www.typescriptlang.org/) for the web stack.** We use TypeScript throughout our web stack. On the frontend we use [React](https://reactjs.org/) with [next.js](https://nextjs.org/). For data fetching we use [apollo-client](https://www.apollographql.com/docs/react/) with fully-typed GraphQL queries auto-generated by [graphql-codegen](https://graphql-code-generator.com/) based on the schema that [Postgraphile](https://www.graphile.org/postgraphile) creates by introspecting the database.

- **[PostgreSQL](https://www.postgresql.org/) for the database, because of course.** Splitgraph is a company built around Postgres, so of course we are going to use it for our own database. In fact, we actually have three databases. We have `auth-db` for storing sensitive data, `registry-db` which acts as a [Splitgraph peer](https://www.splitgraph.com/docs/publishing-data/push-data) so users can push Splitgraph images to it using [sgr](https://www.github.com/splitgraph/splitgraph), and `cloud-db` where we store the schemata that Postgraphile uses to autogenerate the GraphQL server.

- **[PL/pgSQL](https://www.postgresql.org/docs/current/plpgsql.html) and [PL/Python](https://www.postgresql.org/docs/current/plpython.html) for stored procedures.** We define a lot of core business logic directly in the database as stored procedures, which are ultimately [exposed by Postgraphile as GraphQL endpoints](https://www.graphile.org/postgraphile/functions/). We find this to be a surprisingly productive way of developing, as it eliminates the need for manually maintaining an API layer between data and code. It presents challenges for testing and maintainability, but we've built tools to help with database migrations and rollbacks, and an end-to-end testing framework that exercises the database routines.

- **[PostgREST](https://postgrest.org/en/v7.0.0/) for auto-generating a REST API for every repository.** We use this excellent library (written in [Haskell](https://www.haskell.org/)) to expose an [OpenAPI](https://github.com/OAI/OpenAPI-Specification)-compatible REST API for every repository on Splitgraph ([example](http://splitgraph.com/mildbyte/complex_dataset/latest/-/api-schema)).

- **Lua ([luajit](https://luajit.org/luajit.html) 5.x), C, and [embedded Python](https://docs.python.org/3/extending/embedding.html) for scripting [PgBouncer](https://www.pgbouncer.org/).** Our main product, the "data delivery network", is a single SQL endpoint where users can query any data on Splitgraph (a short connection sketch follows this listing). Really it's a layer of PgBouncer instances orchestrating temporary Postgres databases and proxying queries to them, where we load and cache the data necessary to respond to a query. We've added scripting capabilities to enable things like query rewriting, column masking, authentication, ACL, orchestration, firewalling, etc.

- **[Docker](https://www.docker.com/) for packaging services.** Our CI pipeline builds every commit into about a dozen different Docker images, one for each of our services. A production instance of Splitgraph can be running over 60 different containers (including replicas).

- **[Makefile](https://www.gnu.org/software/make/manual/make.html) and [docker-compose](https://docs.docker.com/compose/) for development.** We use [a highly optimized Makefile](https://www.splitgraph.com/blog/makefile) and `docker-compose` so that developers can easily spin up a stack that mimics production in every way, while keeping it easy to hot reload, run tests, or add new services or configuration.

- **[Nomad](https://www.nomadproject.io/) for deployment and [Terraform](https://www.terraform.io/) for provisioning.** We use Nomad to manage deployments and background tasks. Along with Terraform, we're able to spin up a Splitgraph cluster on AWS, GCP, Scaleway or Azure in just a few minutes.

- **[Airflow](https://airflow.apache.org/) for job orchestration.** We use it to run and monitor jobs that maintain our catalog of [40,000 public datasets](https://www.splitgraph.com/blog/40k-sql-datasets), or ingest other public data into Splitgraph.

- **[Grafana](https://grafana.com/), [Prometheus](https://prometheus.io/), [ElasticSearch](https://www.elastic.co/), and [Kibana](https://www.elastic.co/kibana) for monitoring and metrics.** We believe it's important to self-host fundamental infrastructure like our monitoring stack. We use this to keep tabs on important metrics and the health of all Splitgraph deployments.

- **[Mattermost](https://mattermost.com/) for company chat.** We think it's absolutely bonkers to pay a company like Slack to hold your company communication hostage. That's why we self-host an instance of Mattermost for our internal chat. And of course, we can deploy it and update it with Terraform.

- **[Matomo](https://matomo.org/) for web analytics.** We take privacy seriously, and we try to avoid including any third-party scripts on our web pages (currently we include zero). We self-host our analytics because we don't want to share our user data with third parties.

- **[Metabase](https://www.metabase.com/) and [Splitgraph](https://www.splitgraph.com) for BI and [dogfooding](https://en.wikipedia.org/wiki/Eating_your_own_dog_food).** We use Metabase as a frontend to a Splitgraph instance that connects to Postgres (our internal databases), MySQL (Matomo's database), and ElasticSearch (where we store logs and DDN analytics). We use this as a chance to dogfood our software and produce fancy charts.

- **The occasional best-of-breed SaaS service for organization.** As a privacy-conscious, independent-minded company, we try to avoid SaaS services as much as we can. But we still find ourselves unable to resist some of the better products out there. For organization we use tools like [Zoom](https://www.zoom.us) for video calls, [Miro](https://miro.com/) for brainstorming, [Notion](https://www.notion.so) for documentation (you're on it!), [Airtable for workflow management](https://airtable.com/), [PivotalTracker](https://www.pivotaltracker.com/) for ticketing, and [GitLab for dev-ops and CI](https://about.gitlab.com/).

- **Other fun technologies** including [HAProxy](http://www.haproxy.org/), [OpenResty](https://openresty.org/en/), [Varnish](https://varnish-cache.org/), and bash. We don't touch them much because they do their job well and rarely break.

# Life at Splitgraph

**We are a young company building the initial team.** As an early contributor, you'll have a chance to shape our initial mission, growth and company values.

**We think that remote work is the future**, and that's why we're building a remote-first organization. We chat on [Mattermost](https://mattermost.com/) and have video calls on Zoom. We brainstorm with [Miro](https://miro.com/) and organize with [Notion](https://www.notion.so).

**We try not to take ourselves too seriously**, but we are goal-oriented with an ambitious mission.

**We believe that as a small company, we can out-compete incumbents** by thinking from first principles about how organizations interact with data. We are very competitive.

# Benefits

- Fully remote

- Flexible working hours

- Generous compensation and equity package

- Opportunity to make high-impact contributions to an agile team

# How to Apply? Questions?

[**Complete the job application**](https://4o99daw6ffu.typeform.com/to/ePkNQiDp)

If you have any questions or concerns, feel free to email us at [[email protected]](mailto:[email protected])

#Location
🌏 Worldwide
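
The "data delivery network" described above is presented as a single Postgres-compatible SQL endpoint, so any standard Postgres driver should be able to query it. The sketch below is a hedged illustration only: the hostname, database name, credentials and the repository/table identifiers are assumptions or placeholders, not taken from the posting.

```python
# Minimal sketch: query the Splitgraph DDN as if it were ordinary Postgres.
# Host and dbname are assumed values; user, password and the repository/table
# names are placeholders to replace with real ones.
import psycopg2

conn = psycopg2.connect(
    host="data.splitgraph.com",  # assumed public DDN endpoint
    port=5432,
    user="<api-key>",
    password="<api-secret>",
    dbname="ddn",
)
try:
    with conn.cursor() as cur:
        cur.execute('SELECT * FROM "namespace/repository"."table_name" LIMIT 5')
        for row in cur.fetchall():
            print(row)
finally:
    conn.close()
```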


See more jobs at Splitgraph

# How do you apply?

This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.

DataKitchen (verified, closed)
🌏 Worldwide · 💰 $50k - $85k
python · docker · aws · azure

This job post is closed and the position is probably filled. Please do not apply.
Job description

We are seeking a world-class Manager of Toolchain Software Engineering, whose charter is to create a technical design-and-build team that can rapidly integrate dozens of tools into DataKitchen's DataOps platform. There are hundreds of tools that our customers use to do their day-to-day work: data science, data engineering, data visualization, and governance. We have integrated many of those tools, but our customers are better served by starting with example 'content.' And for us, that content is Recipes/Pipelines with working tool integrations across the varied toolchains/clouds that our customers and prospects use to do data analytics. We want our customers to start from example content and be doing DataOps on their platform in less than 10 minutes.

This is your chance to create a team from scratch and build a capability that is essential to our company's success. This is a technical role -- we are looking for a person who will code as well as hire and manage a team of engineers to do the work. The position demands strong communication, planning, and management abilities.

PRINCIPAL DUTIES & RESPONSIBILITIES

Lead and grow the Toolchain Software Engineering organization, building a highly professional and motivated group.
Deliver example content and integrations with consistently high quality and reliability, in a timely and predictable manner.
Be responsible for the overall toolchain and example life cycle, including testing, updates, design, open-source sharing, and documentation.
Manage departmental resources and staffing, and build a best-in-class engineering team.
Manage customer support issues in order to deliver a timely resolution to customers' software issues.

ESSENTIAL KNOWLEDGE, SKILLS, AND EXPERIENCE

BS or MS in Computer Science or a related field
At least 3-5 years of development experience building software or software tools
Minimum of 1 year of experience in a project manager or engineering lead position
Excellent verbal and written communication skills
Technical experience in the following areas preferred:
Python, Docker, SQL, AWS, Azure, or GCP
Understanding of data science, data visualization, data quality, or data integration
Jenkins, DevOps, CI/CD

PERSONALITY TRAITS

Leadership with flexibility and self-motivation – with a problem solver's attitude
Highly effective written and verbal communication skills with a collaborative work style
Customer focus, and a keen desire to make every customer successful
Ability to create an open environment conducive to freely sharing information and ideas

Our company is committed to being remote-first, with employees in Cambridge MA, various other states, Buenos Aires Argentina, Italy, and other countries. You must be located within GMT+2 (e.g. Italy) to GMT-8 (e.g. California). We will not consider candidates outside those time zones. We do not work with recruiters.

DataKitchen is profitable and self-funded and located in Cambridge, MA, USA.

#Salary and compensation
$50,000 — $85,000/year

#Location
🌏 Worldwide


See more jobs at DataKitchen

# How do you apply?

This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.

Carb Manager (verified, closed)
🌏 Worldwide
agile · vue · firebase

This job post is closed and the position is probably filled. Please do not apply.
Are you excited by the idea of translating a product vision that can transform lives into elegant, well-architected code?

Do you have the rare mix of extraordinary attention to detail, plus the ability to think holistically about a problem and make business-informed technical decisions?

As the Director of Engineering at Carb Manager, you'll be leading a fully remote team of seven highly skilled full-stack and mobile developers, all on a mission to bring the best health management tools to our growing and diverse user base.

For this role, we're seeking someone who can serve as a software architect; a liaison between the development team and our other orgs; and a mentor and coach to our developers. And we're also looking for someone who can code! We're a tech-driven company, and even our CEO regularly ships code.

**About Carb Manager**

[Carb Manager](https://my.carbmanager.com) is the most popular health & fitness app for people tracking macronutrients and following a low-carb or ketogenic diet. Since 2010, we've helped millions of users lose weight, improve their metabolic health, and become more mindful of what they're eating. Unlike most diet tracking apps, we focus on the balance of macros, not just calories in, calories out!

Our tech stack includes VueJS, Firebase/Firestore, Netlify, GCP, and Jest. We communicate on Slack, Clubhouse, Zoom, and all the usual remote tools.

**About This Role**

The Director of Engineering is responsible for guiding the architecture of our web and mobile applications, including when shipping new code and while refactoring. This person will also lead the development team, and will provide code reviews, mentorship, and everyday and more formal feedback to our developers.

On a day-to-day level, this role involves:

- Working alongside the product manager and product designer to translate designs and feature specs into actionable, clearly defined technical specs.
- Identifying areas of the codebase in need of refactoring and guiding developers in improving code quality, applying the appropriate design patterns.
- Maintaining code standards, including any necessary documentation.
- Participating in code review of every PR submitted.
- Leading our stand-up meetings.
- Providing ongoing and periodic mentoring of the developers, including weekly 1:1 calls and quarterly performance check-ins.
- Assigning tasks for each iteration/sprint, and working with the product manager to break down assignments, answer questions, and provide ongoing guidance.
- Being the technical point of contact for questions from other groups, like operations and marketing.
- Meeting regularly with senior leadership to steer overarching technical and product strategy.
- Shipping code!

**What We're Looking For**

A successful candidate will have at least 10 years of total development experience, and at least 5 years of experience in a lead role, preferably in an agile environment. A B.S. and/or M.S. in Computer Science, Software Engineering, or another technical field is highly desirable.

**Working At Carb Manager**

Carb Manager is a fully remote company with team members all over the world. We're a fun, high-performing team, with a shared passion for enabling healthy transformations.

We offer:

- Work from anywhere
- Highly competitive salary
- Medical and dental insurance
- 401(k) option
- 12 paid holidays
- A generous vacation package (plus personal days as needed)
- Tech budget to purchase the equipment you need
- We plan to add yearly in-person gatherings once that becomes possible again. Our next destination is tentatively Lisbon!

#Location
🌏 Worldwide


See more jobs at Carb Manager

# How do you apply?

This job post has been closed by the poster, which means they probably have enough applicants now. Please do not apply.
Clevertech

This job post is closed and the position is probably filled. Please do not apply.
**Working at Clevertech**

People do their best work when they're cared for and in the right environment:

* RemoteNative™: Pioneers in the industry, we are committed to remote work.
* Flexibility: Wherever you are, and wherever you want to go. We embrace the freedom gained through trust and professionalism.
* Team: Be part of an amazing team of senior engineers that you can rely on.
* Growth: Become a master in the art of remote work and effective communication.
* Compensation: Best-in-class compensation for remote workers plus the swag you want.
* Cutting Edge: Stay sharp in your space, work at the very edge of tech.
* Passion: Annual financial allowance for YOUR development and YOUR passions.

**The Job**

* 7+ years of professional experience (a technical assessment will be required)
* Senior-level experience with BI, data analytics, or data engineering
* Vast experience with GCP big-data tooling: BigQuery, Bigtable, etc.
* Experience with reporting, large datasets, and complex queries
* Solid knowledge of data integration tools, ETL, and data modeling
* Adtech industry experience a plus
* English fluency, verbal and written
* Professional, empathic, team player
* Problem solver, proactive, go-getter

**Life at Clevertech**

We're Clevertech. Since 2000, we have been building technology through empowered individuals. As a team, we challenge in order to be of service, to deliver growth and drive business for our clients.

Our team is made up of people who are not only from different countries, but also from diverse backgrounds and disciplines. A coordinated team of individuals that care, take on responsibility, and drive change.

https://youtu.be/1OKhKatReyg

**Getting Hired**

Interested in exploring your future in this role and Clevertech? Set yourself up for success and take a look at our [Interview Process](https://www.clevertech.biz/thoughts/interviewing-with-clevertech) before getting started!

#Location
US, Canada, Europe


See more jobs at Clevertech

# How do you apply?

This job post has been closed by the poster, which means they probably have enough applicants now. Please do not apply.

Prominent Edge (verified, closed)
🇺🇸 US-only
devops · aws · azure

This job post is closed and the position is probably filled. Please do not apply.
We are looking for a Lead DevOps Engineer to join our team at Prominent Edge. We are a small, stable, growing company that believes in doing things right. Our projects and the needs of our customers vary greatly; therefore, we always choose the technology stack and approach that best suits the particular problem and the goals of our customers. As a result, we want engineers who do high-quality work, stay current, and are up for learning and applying new technologies when appropriate. We want engineers who have in-depth knowledge of Amazon Web Services and are up for using other infrastructures when needed. We understand that for our team to perform at its best, everyone needs to work on tasks that they enjoy. Many of our projects are web applications which often have a geospatial aspect to them. We also really take care of our employees, as demonstrated in our exceptional benefits package. Check out our website at https://prominentedge.com/ for more information and apply through https://prominentedge.com/careers.

Required skills:
* Experience as a lead engineer.
* Minimum of 8 years of total experience, including a minimum of 1 year of web or software development experience.
* Experience automating the provisioning of environments by designing, implementing, and managing configuration and deployment infrastructure-as-code solutions.
* Experience delivering scalable solutions utilizing Amazon Web Services: EC2, S3, RDS, Lambda, API Gateway, message queues, and CloudFormation templates (a tiny automation sketch follows this listing).
* Experience with deploying and administering Kubernetes on AWS, GCP, or Azure.
* Capable of designing secure and scalable solutions.
* Strong *nix administration skills.
* Development in a Linux environment using Bash, PowerShell, Python, JS, Go, or Groovy.
* Experience automating and streamlining build, test, and deployment phases for continuous integration.
* Experience with automated deployment technologies such as Ansible, Puppet, or Chef.
* Experience administering automated build environments such as Jenkins and Hudson.
* Experience configuring and deploying logging and monitoring services - fluentd, logstash, GeoHashes, etc.
* Experience with Git/GitHub/GitLab.
* Experience with DockerHub or a container registry.
* Experience with building and deploying containers to a production environment.
* Strong knowledge of security and recovery from a DevOps perspective.

Bonus skills:
* Experience with RabbitMQ and its administration.
* Experience with kops.
* Experience with HashiCorp Vault administration and Goldfish (a frontend Vault UI).
* Experience with Helm for deployment to Kubernetes.
* Experience with CloudWatch.
* Experience with Ansible and/or a configuration management language.
* Experience with Ansible Tower (not necessary).
* Experience with VPNs; OpenVPN preferable.
* Experience with network administration and understanding network topology and architecture.
* Experience with AWS spot instances or Google preemptible VMs.
* Experience with Grafana administration, SSO (Okta or JumpCloud preferable), LDAP / Active Directory administration, CloudHealth or cloud cost optimization.
* Experience with Kubernetes-based software, for example heptio/ark, ingress-nginx, Anchore Engine.
* Familiarity with the ELK stack.
* Familiarity with basic administrative tasks and building artifacts on Windows.
* Familiarity with other cloud infrastructures such as Cloud Foundry.
* Strong web or software engineering experience.
* Familiarity with security clearances in case you contribute to our non-commercial projects.

W2 Benefits:
* Not only do you get to join our team of awesome playful ninjas, we also have great benefits:
* Six weeks paid time off per year (PTO + holidays).
* Six percent 401k matching, vested immediately.
* Free PPO/POS healthcare for the entire family.
* We pay you for every hour you work. Need something extra? Give yourself a raise by doing more hours when you can.
* Want to take time off without using vacation time? Shuffle your hours around in any pay period.
* Want a new MacBook Pro laptop? We'll get you one. If you like your MacBook Pro, we'll buy you the new version whenever you want.
* Want some training or to travel to a conference that is relevant to your job? We offer that too!
* This organization participates in E-Verify.

About You:
* You believe in and practice Agile/DevOps.
* You are organized and eager to accept responsibility.
* You want a seat at the table at the inception of new efforts; you do not want things "thrown over the wall" to you.
* You are an active listener, empathetic, and willing to understand and internalize the unique needs and concerns of each individual client.
* You adjust your speaking style for your audience and can interact successfully with both technical and non-technical clients.
* You are detail-oriented but never lose sight of the Big Picture.
* You can work equally well individually or as part of a team.
* U.S. citizenship required.

#Location
🇺🇸 US-only
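
The required skills above emphasise automating environment provisioning and working with core AWS services. As a hedged illustration only (the region, tag convention and reporting logic are assumptions for the example, not requirements from the posting), a small boto3 script in the spirit of that automation work might look like this:

```python
# Minimal sketch of a DevOps automation task: flag running EC2 instances missing an Owner tag.
# Region name and the "Owner" tag convention are illustrative assumptions.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

paginator = ec2.get_paginator("describe_instances")
pages = paginator.paginate(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
)

for page in pages:
    for reservation in page["Reservations"]:
        for instance in reservation["Instances"]:
            tags = {t["Key"]: t["Value"] for t in instance.get("Tags", [])}
            if "Owner" not in tags:
                print(f"untagged instance: {instance['InstanceId']}")
```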


See more jobs at Prominent Edge

# How do you apply?

This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.