Remote Grafana Jobs
TripleLift is hiring a Remote Senior Cloud Engineer

About TripleLift

TripleLift, one of the fastest-growing ad tech companies in the world, is rooted at the intersection of creative and media. Its mission is to make advertising better for everyone - content owners, advertisers and consumers - by reinventing ad placement one medium at a time. With direct inventory sources, diverse product lines, and creative designed for scale using our Computer Vision technology, TripleLift is driving the next generation of programmatic advertising from desktop to television.

As of January 2021, TripleLift has recorded five years of consecutive growth of greater than 70 percent. TripleLift is a Business Insider Hottest Ad Tech Company, Inc. Magazine 5000, Crain's New York Fast 50, Deloitte Technology Fast 500 and among Inc.'s Best Workplaces. Find more information about how TripleLift is shaping the future of advertising at triplelift.com.

The Role

TripleLift is seeking an experienced DevOps engineer to join our team full time. We are a fast-growing startup in the advertising technology sector, tackling some of the most challenging problems facing the industry. As a DevOps engineer, you will be responsible for giving the engineering team the leverage to do the best possible work. This includes managing the infrastructure, working with engineers to improve their deployment and release process, and constantly searching for ways to improve our infrastructure.

Core Technologies

We employ a wide variety of technologies at TripleLift to accomplish our goals. From our early days, we've always believed in using the right tools for the right job, and we continue to explore new technology options as we grow. The DevOps team uses the following technologies:

Tools: Chef, Ansible, Terraform, Docker, Kubernetes, CircleCI, Spinnaker, Prometheus, Grafana, Vault, Consul, Snowflake, Airflow, Databricks
Databases: Aerospike, RDS MySQL, Redshift, MongoDB, and more
Languages: Java, Python, Node.js, TypeScript, Scala, and more
Amazon Web Services and Google Cloud (GCP) to keep everything humming

Responsibilities

Collaborate with the rest of the engineering team on best practices for writing and scaling good code;
Improve our infrastructure and deployment processes;
Build tools that make every engineer more productive;
Work with each team to optimize their application performance;
Develop a unified system for monitoring, logging and error handling;
Work with teams to optimize and reduce cloud costs;
Search for industry best practices and use them to drive our team forward.

Desired Skills and Attributes

Significant experience in a DevOps or SRE role;
Understanding of container technologies like Docker and what it takes to containerize applications;
Loves automation and automating away repetitive work;
Understands best practices of application, data, and cloud security;
Understands best practices for building scalable, reliable, highly available and secure infrastructure;
Strong understanding of cloud networking and network architecture, especially in the context of multi-region applications;
Skilled in software provisioning, configuration management, and infrastructure automation tools;
Ability to code well in at least one programming language;
Comfortable taking ownership of projects and showcasing key accomplishments;
Strives for continued learning opportunities to build upon their craft;
Excellent organizational skills and attention to detail;
Ability to work quickly and independently with minimal oversight;
Ability to work under pressure and multitask in a fast-paced start-up environment;
Willingness to accept feedback and constructive criticism;
Extremely strong and demonstrable work ethic;
Proven academic and/or professional achievement.

Education Requirement

A Bachelor's degree in a technical subject is preferred, although candidates with relevant experience who hold other degrees will be considered.

Experience Requirement

At least five years of working experience in a professional, collaborative environment.

Location

New York or Kitchener-Waterloo preferred, but open to remote candidates.

Benefits and Company Perks

100% Medical, Dental & Vision Plans
Unlimited PTO
401k, FSA, Commuter Benefits
Weekly Yoga & Bootcamp
Membership to Headspace (Meditation)
Ongoing professional development
Amazing company culture

Note: The Fair Labor Standards Act (FLSA) is a federal labor law of general and nationwide application, covering overtime, minimum wages, child labor protections, and the Equal Pay Act. This role is FLSA-exempt.

Awards

We love celebrating our achievements. They remind us of our contributions making advertising work for everyone, and the TripleLifters who make it all possible. TripleLift is proud to be recognized by Inc. as a Best Workplace for our culture and benefits, and among Inc.'s Best in Business for our innovations and positive impact on the industry.

To check out more of our awards and distinctions, please visit https://triplelift.com/ideas/#distinctions

Diversity, Equity, Inclusion and Accessibility at TripleLift

At TripleLift, we believe in the power of diversity, equity, inclusion and accessibility. Our culture enables individuals to share their uniqueness and contribute as part of a team. With our DEIA initiatives, TripleLift is a place that works for you, and where you can feel a sense of belonging. At TripleLift, we will consider and champion all qualified applicants for employment without regard to race, creed, color, religion, national origin, sex, age, disability, sexual orientation, gender identity, gender expression, genetic predisposition, veteran, marital, or any other status protected by law. TripleLift is proud to be an equal opportunity employer.

TripleLift does not accept unsolicited resumes from any type of recruitment search firm. Any resume submitted in the absence of a signed agreement will become the property of TripleLift and no fee shall be due.

#Salary and compensation
$120,000 - $200,000/year

#Location
United States, Eastern Standard Time Zone


See more jobs at TripleLift

This month's Remote Grafana Jobs

Chainlink Labs

🌏 Worldwide · 💰 $100k - $200k

Tags: dev, smart contracts, aws, terraform, terragrunt
Chainlink Labs is hiring a Remote DevOps Engineer

**All roles with Chainlink Labs are globally remote based. We encourage you to apply regardless of your location.**

The infrastructure team enables Chainlink development and maintains services that support the health of the most widely adopted oracle network in the world. As a DevOps Engineer, you will help us maintain the Chainlink infrastructure, ensure the reliable operation of internal and customer-facing services, and empower the entire engineering organization to do their best work.

This job would be perfect for someone who has a strong operations background and would eventually like to grow into an SRE role. The infrastructure team is expanding, and you would have plenty of opportunities to build up your skillset in different areas.

We are distributed across time zones and continents, and we embrace remote work. In the Infrastructure team, we follow the infrastructure-as-code approach and practice GitOps. Our on-call rotation uses the follow-the-sun pattern: you will be on call some of the time, but there should not be any overnight shifts.

We all have different backgrounds and are determined to help you succeed no matter where you are or who you are. If you think you would do a great job at Chainlink, we look forward to speaking with you, even if you don't match 100% of the job requirements: the requirements describe people we've usually had a great time working with, but they're not a tick-box exercise.

**Your Impact**

* Maintain full nodes for the various blockchains Chainlink supports and find ways to deploy and manage them more efficiently.
* Deploy new Chainlink nodes and ensure their reliability.
* Understand blockchain-specific monitoring in great depth and help the team cut down on noise by fine-tuning alerts.
* Pair with engineers from across the company to help with troubleshooting, deploy new services, and figure out how to increase developer velocity and eliminate pain points.

**Requirements**

* 3+ years of relevant professional experience. You probably have an operations background, have worked in a DevOps team before, and are familiar with most tools from our stack (below).
* Experience with CI/CD. You know how to deploy your services reliably and have used tools like GitHub Actions, CircleCI, TravisCI, or Jenkins to achieve that.
* Experience with scripting and configuration management. You can write scripts to automate routine tasks and are familiar with tools like Ansible and Packer.
* Experience with monitoring and logging. You know how to export metrics to Prometheus, have built a Grafana dashboard or two, and have experience with a centralized logging solution like the Elastic Stack, Splunk or LogDNA.
* Experience with distributed systems and container orchestration. You have maintained or even built Kubernetes clusters before and feel comfortable deploying completely new services on them.
* Strong communication skills. You can give and receive constructive feedback, and you do not shy away from planning meetings and code reviews.

**Desired Qualifications**

* Excitement for blockchain, Web 3.0, and similar decentralized technologies.
* Experience running blockchain full nodes would give you a considerable advantage in this role.
* Experience with Chainlink as a developer or a node operator is a similarly big plus.
* Experience with GitHub Actions, and self-hosted runners in particular.
* Experience working remotely in a distributed team.
* A strong desire to grow and challenge yourself. While this role is mainly focused on maintenance, we would expect you to constantly find ways to improve and automate services under your purview.
* We give slight preference to candidates who live in the UTC to UTC+8 range due to our on-call schedule for this particular opening.

**Our Stack**

Some of the tools and services we use daily or almost daily are:

AWS; Terraform/Terragrunt; Kubernetes, Calico and ArgoCD; Prometheus and Grafana; GitHub Actions; Packer

We expect you to be comfortable with most of those tools and proficient in at least a couple of them.

**About Us**

Chainlink is the industry-standard oracle network for connecting smart contracts to the real world. With Chainlink, developers can build hybrid smart contracts that combine on-chain code with an extensive collection of secure off-chain services powered by Decentralized Oracle Networks. Managed by a global, decentralized community of hundreds of thousands of people, Chainlink is introducing a fairer model for contracts. Its network currently secures billions of dollars in value for smart contracts across the decentralized finance (DeFi), insurance, and gaming ecosystems, among others. The full vision of the Chainlink Network can be found in the [Chainlink 2.0 whitepaper](https://research.chain.link/whitepaper-v2.pdf). Chainlink is trusted by hundreds of organizations, from global enterprises to projects at the forefront of the blockchain economy, to deliver definitive truth via secure, reliable data.

This role is location agnostic and can be done from anywhere in the world, but we ask that you overlap some working hours with Eastern Standard Time (EST).

We are a fully distributed team and have the tools and benefits to support you in your remote work environment.

Chainlink Labs is an Equal Opportunity Employer.

#Salary and compensation
$100,000 - $200,000/year

#Location
🌏 Worldwide
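As a concrete illustration of the "export metrics to Prometheus" requirement above, here is a minimal sketch using the official `prometheus_client` Python library. The metric names and the node-health probe are hypothetical stand-ins for illustration, not part of Chainlink's actual stack:

```python
# Minimal Prometheus exporter sketch (hypothetical metrics, illustrative only).
import random
import time

from prometheus_client import Counter, Gauge, start_http_server

# Gauge: a value that can rise and fall, e.g. how far a node lags the chain head.
BLOCK_LAG = Gauge("node_block_lag", "Blocks behind the network head")
# Counter: a monotonically increasing value, e.g. failed health probes.
PROBE_FAILURES = Counter("node_probe_failures_total", "Failed node health probes")

def probe_node() -> int:
    """Stand-in for a real RPC call to a full node; returns current block lag."""
    return random.randint(0, 5)

if __name__ == "__main__":
    start_http_server(8000)  # metrics exposed at http://localhost:8000/metrics
    while True:
        try:
            BLOCK_LAG.set(probe_node())
        except Exception:
            PROBE_FAILURES.inc()
        time.sleep(15)  # roughly one Prometheus scrape interval
```

A Grafana dashboard would then chart `node_block_lag` straight from the Prometheus data source, and alert thresholds on it are the kind of fine-tuning the posting describes.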


See more jobs at Chainlink Labs

Chainlink Labs



🌏 Worldwide · 💰 $100k - $200k

Tags: dev, smart contracts, aws, terraform, terragrunt
Chainlink Labs is hiring a Remote Site Reliability Engineer

**All roles with Chainlink Labs are globally remote based. We encourage you to apply regardless of your location.**

The infrastructure team enables Chainlink development and maintains services that support the health of the most widely adopted oracle network in the world. As a Site Reliability Engineer, you will help us solve some of the unique challenges of blockchain oracle architecture and be primarily responsible for the off-chain part of the Chainlink ecosystem.

We are distributed across time zones and continents, and we embrace remote work. In the Infrastructure team, we follow the infrastructure-as-code approach and practice GitOps. Our on-call rotation uses the follow-the-sun pattern: you will be on call some of the time, but there should not be any overnight shifts.

We all have different backgrounds and are determined to help you succeed no matter where you are or who you are. If you think you would do a great job at Chainlink, we look forward to speaking with you, even if you don't match 100% of the job requirements: the requirements describe people we've usually had a great time working with, but they're not a tick-box exercise.

**Your Impact**

* Support monitoring services that watch over the entire Chainlink network.
* Deploy and maintain various externally facing services, such as the reference Chainlink nodes used by developers and customers (including critical services such as Chainlink VRF).
* Improve the reliability and observability of our internal infrastructure.
* Provide our engineers with a reliable release pipeline and empower them to release and deploy Chainlink and adjacent tools extremely quickly.

**Requirements**

* 5+ years of relevant professional experience. You have a software engineering or operations background and have worked as an SRE (or in a closely related position) before.
* Experience with system architecture. You can create a design document for a cross-region load-balancing app with five microservices, a PostgreSQL cluster, a caching layer, and a Kafka queue, and then implement it on AWS.
* Experience with CI/CD pipelines. You can troubleshoot an existing pipeline or build your own, and you've probably worked on both software delivery and cloud-based services deployment.
* Experience with distributed systems and container orchestration. You have built or maintained complex Kubernetes clusters before.
* Ability to read and write code. You can understand precisely why a recent code change led to degraded performance; you can write scripts and tools to automate routine tasks and eliminate toil.
* Strong communication skills. You can give and receive constructive feedback, and you do not shy away from planning meetings and code reviews.

**Preferred Qualifications**

* Professional experience with Golang, TypeScript, or both.
* Excitement for blockchain, Web 3.0, and similar decentralized technologies.
* Experience running blockchain full nodes is a big plus.
* Experience with Chainlink as a developer or a node operator is a similarly big plus.
* Comfort working with network protocols, proxies, and load balancers.
* Experience with information security and DevSecOps.
* Experience working remotely in a distributed team.
* We give slight preference to candidates who live in the UTC to UTC+8 range due to our on-call schedule for this particular opening.

**Our Stack**

Some of the tools and services we use daily or almost daily are:

AWS; Terraform/Terragrunt; Kubernetes, Calico and ArgoCD; Prometheus and Grafana; GitHub Actions; Packer

We expect you to be comfortable with most of those tools and very proficient in several of them.

**About Us**

Chainlink is the industry-standard oracle network for connecting smart contracts to the real world. With Chainlink, developers can build hybrid smart contracts that combine on-chain code with an extensive collection of secure off-chain services powered by Decentralized Oracle Networks. Managed by a global, decentralized community of hundreds of thousands of people, Chainlink is introducing a fairer model for contracts. Its network currently secures billions of dollars in value for smart contracts across the decentralized finance (DeFi), insurance, and gaming ecosystems, among others. The full vision of the Chainlink Network can be found in the [Chainlink 2.0 whitepaper](https://research.chain.link/whitepaper-v2.pdf). Chainlink is trusted by hundreds of organizations, from global enterprises to projects at the forefront of the blockchain economy, to deliver definitive truth via secure, reliable data.

This role is location agnostic and can be done from anywhere in the world, but we ask that you overlap some working hours with Eastern Standard Time (EST).

We are a fully distributed team and have the tools and benefits to support you in your remote work environment.

Chainlink Labs is an Equal Opportunity Employer.

#Salary and compensation
$100,000 - $200,000/year

#Location
🌏 Worldwide
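The "automate routine tasks and eliminate toil" requirement is easiest to picture with a small example. Below is a sketch, using only the Python standard library, of the sort of one-off tool an SRE might write to probe a set of service health endpoints in parallel. The endpoint URLs are assumptions for illustration (Prometheus really does expose `/-/healthy`; the other URL is a placeholder):

```python
# Toil-reduction sketch: probe service health endpoints concurrently and
# exit non-zero if any of them are down. Endpoint URLs are illustrative.
import sys
from concurrent.futures import ThreadPoolExecutor
from urllib.error import URLError
from urllib.request import urlopen

ENDPOINTS = [
    "http://localhost:9090/-/healthy",  # Prometheus's built-in health endpoint
    "http://localhost:8000/metrics",    # placeholder internal exporter
]

def check(url: str) -> tuple[str, bool]:
    """Return (url, healthy?) for a single HTTP health probe."""
    try:
        with urlopen(url, timeout=5) as resp:
            return url, 200 <= resp.status < 300
    except (URLError, OSError):
        return url, False

if __name__ == "__main__":
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(check, ENDPOINTS))
    for url, healthy in results:
        print(f"{'OK  ' if healthy else 'FAIL'} {url}")
    sys.exit(0 if all(healthy for _, healthy in results) else 1)
```

Run from cron or a CI job, a script like this turns a manual checklist into a single pass/fail signal, which is what "eliminating toil" usually means in practice.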


See more jobs at Chainlink Labs

Previous Remote Grafana Jobs

This job post is closed and the position is probably filled. Please do not apply.
# We're building the Data Platform of the Future

Join us if you want to rethink the way organizations interact with data. We are a **developer-first company**, committed to building around open protocols and delivering the best experience possible for data consumers and publishers.

Splitgraph is a **seed-stage, venture-funded startup hiring its initial team**. The two co-founders are looking to grow the team to five or six people. This is an opportunity to make a big impact on an agile team while working closely with the founders.

Splitgraph is a **remote-first organization**. The founders are based in the UK, and the company is incorporated in both the USA and the UK. Candidates are welcome to apply from any geography. We want to work with the most talented, thoughtful and productive engineers in the world.

# Open Positions

**Data Engineers welcome!** The job titles have "Software Engineer" in them, but at Splitgraph there's a lot of overlap between data and software engineering. We welcome candidates from all engineering backgrounds.

[Senior Software Engineer - Backend (mainly Python)](https://www.notion.so/splitgraph/Senior-Software-Engineer-Backend-2a2f9e278ba347069bf2566950857250)

[Senior Software Engineer - Frontend (mainly TypeScript)](https://www.notion.so/splitgraph/Senior-Software-Engineer-Frontend-6342cd76b0df483a9fd2ab6818070456)

→ [**Apply to Job**](https://4o99daw6ffu.typeform.com/to/ePkNQiDp) ← (same form for both positions)

# What is Splitgraph?

## **Open Source Toolkit**

[Our open-source product, sgr,](https://www.github.com/splitgraph/splitgraph) is a tool for building, versioning and querying reproducible datasets. It's inspired by Docker and Git, so it feels familiar. And it's powered by PostgreSQL, so it works seamlessly with existing tools in the Postgres ecosystem. Use Splitgraph to package your data into self-contained data images that you can share with other Splitgraph instances.

## **Splitgraph Cloud**

Splitgraph Cloud is a platform for data cataloging, integration and governance. The user can upload data, connect live databases, or "push" versioned snapshots to it. We give them a unified SQL interface to query that data, a catalog to discover and share it, and tools to build/push/pull it.

# Learn More About Us

- Listen to our interview on the [Software Engineering Daily podcast](https://softwareengineeringdaily.com/2020/11/06/splitgraph-data-catalog-and-proxy-with-miles-richardson/)
- Watch our co-founder Artjoms present [Splitgraph at the Bay Area ClickHouse meetup](https://www.youtube.com/watch?v=44CDs7hJTho)
- Read our HN/Reddit posts ([one](https://news.ycombinator.com/item?id=24233948), [two](https://news.ycombinator.com/item?id=23769420), [three](https://news.ycombinator.com/item?id=23627066), [four](https://old.reddit.com/r/datasets/comments/icty0r/we_made_40k_open_government_datasets_queryable/))
- [Read our blog](https://www.splitgraph.com/blog)
- Read the slides from our early (2018) presentations: ["Docker for Data"](https://www.slideshare.net/splitgraph/splitgraph-docker-for-data-119112722), [AHL Meetup](https://www.slideshare.net/splitgraph/splitgraph-ahl-talk)
- [Follow us on Twitter](https://www.twitter.com/splitgraph)
- [Find us on GitHub](https://www.github.com/splitgraph)
- [Chat with us in our community Discord](https://discord.gg/eFEFRKm)
- Explore the [public data catalog](https://www.splitgraph.com/explore) where we index 40k+ datasets

# How We Work: What's our stack look like?

We prioritize developer experience and productivity. We resent repetition and inefficiency, and we never hesitate to automate the things that cause us friction. Here's a sampling of the languages and tools we work with:

- **[Python](https://www.python.org/) for the backend.** Our [core open source](https://www.github.com/splitgraph/splitgraph) tech is written in Python (with [a bit of C](https://github.com/splitgraph/Multicorn) to make it more interesting), as is most of our backend code. The Python code powers everything from authentication routines to database migrations. We use the latest version and tools like [pytest](https://docs.pytest.org/en/stable/), [mypy](https://github.com/python/mypy) and [Poetry](https://python-poetry.org/) to help us write quality software.

- **[TypeScript](https://www.typescriptlang.org/) for the web stack.** We use TypeScript throughout our web stack. On the frontend we use [React](https://reactjs.org/) with [next.js](https://nextjs.org/). For data fetching we use [apollo-client](https://www.apollographql.com/docs/react/) with fully-typed GraphQL queries auto-generated by [graphql-codegen](https://graphql-code-generator.com/) based on the schema that [Postgraphile](https://www.graphile.org/postgraphile) creates by introspecting the database.

- **[PostgreSQL](https://www.postgresql.org/) for the database, because of course.** Splitgraph is a company built around Postgres, so of course we are going to use it for our own database. In fact, we actually have three databases. We have `auth-db` for storing sensitive data, `registry-db` which acts as a [Splitgraph peer](https://www.splitgraph.com/docs/publishing-data/push-data) so users can push Splitgraph images to it using [sgr](https://www.github.com/splitgraph/splitgraph), and `cloud-db` where we store the schemata that Postgraphile uses to autogenerate the GraphQL server.

- **[PL/pgSQL](https://www.postgresql.org/docs/current/plpgsql.html) and [PL/Python](https://www.postgresql.org/docs/current/plpython.html) for stored procedures.** We define a lot of core business logic directly in the database as stored procedures, which are ultimately [exposed by Postgraphile as GraphQL endpoints](https://www.graphile.org/postgraphile/functions/). We find this to be a surprisingly productive way of developing, as it eliminates the need for manually maintaining an API layer between data and code. It presents challenges for testing and maintainability, but we've built tools to help with database migrations and rollbacks, and an end-to-end testing framework that exercises the database routines.

- **[PostgREST](https://postgrest.org/en/v7.0.0/) for auto-generating a REST API for every repository.** We use this excellent library (written in [Haskell](https://www.haskell.org/)) to expose an [OpenAPI](https://github.com/OAI/OpenAPI-Specification)-compatible REST API for every repository on Splitgraph ([example](http://splitgraph.com/mildbyte/complex_dataset/latest/-/api-schema)).

- **Lua ([luajit](https://luajit.org/luajit.html) 5.x), C, and [embedded Python](https://docs.python.org/3/extending/embedding.html) for scripting [PgBouncer](https://www.pgbouncer.org/).** Our main product, the "data delivery network", is a single SQL endpoint where users can query any data on Splitgraph. Really it's a layer of PgBouncer instances orchestrating temporary Postgres databases and proxying queries to them, where we load and cache the data necessary to respond to a query. We've added scripting capabilities to enable things like query rewriting, column masking, authentication, ACL, orchestration, firewalling, etc.

- **[Docker](https://www.docker.com/) for packaging services.** Our CI pipeline builds every commit into about a dozen different Docker images, one for each of our services. A production instance of Splitgraph can be running over 60 different containers (including replicas).

- **[Makefile](https://www.gnu.org/software/make/manual/make.html) and [docker-compose](https://docs.docker.com/compose/) for development.** We use [a highly optimized Makefile](https://www.splitgraph.com/blog/makefile) and `docker-compose` so that developers can easily spin up a stack that mimics production in every way, while keeping it easy to hot-reload, run tests, or add new services or configuration.

- **[Nomad](https://www.nomadproject.io/) for deployment and [Terraform](https://www.terraform.io/) for provisioning.** We use Nomad to manage deployments and background tasks. Along with Terraform, we're able to spin up a Splitgraph cluster on AWS, GCP, Scaleway or Azure in just a few minutes.

- **[Airflow](https://airflow.apache.org/) for job orchestration.** We use it to run and monitor jobs that maintain our catalog of [40,000 public datasets](https://www.splitgraph.com/blog/40k-sql-datasets), or ingest other public data into Splitgraph.

- **[Grafana](https://grafana.com/), [Prometheus](https://prometheus.io/), [ElasticSearch](https://www.elastic.co/), and [Kibana](https://www.elastic.co/kibana) for monitoring and metrics.** We believe it's important to self-host fundamental infrastructure like our monitoring stack. We use this to keep tabs on important metrics and the health of all Splitgraph deployments.

- **[Mattermost](https://mattermost.com/) for company chat.** We think it's absolutely bonkers to pay a company like Slack to hold your company communication hostage. That's why we self-host an instance of Mattermost for our internal chat. And of course, we can deploy it and update it with Terraform.

- **[Matomo](https://matomo.org/) for web analytics.** We take privacy seriously, and we try to avoid including any third-party scripts on our web pages (currently we include zero). We self-host our analytics because we don't want to share our user data with third parties.

- **[Metabase](https://www.metabase.com/) and [Splitgraph](https://www.splitgraph.com) for BI and [dogfooding](https://en.wikipedia.org/wiki/Eating_your_own_dog_food).** We use Metabase as a frontend to a Splitgraph instance that connects to Postgres (our internal databases), MySQL (Matomo's database), and ElasticSearch (where we store logs and DDN analytics). We use this as a chance to dogfood our software and produce fancy charts.

- **The occasional best-of-breed SaaS service for organization.** As a privacy-conscious, independent-minded company, we try to avoid SaaS services as much as we can. But we still find ourselves unable to resist some of the better products out there. For organization we use tools like [Zoom](https://www.zoom.us) for video calls, [Miro](https://miro.com/) for brainstorming, [Notion](https://www.notion.so) for documentation (you're on it!), [Airtable for workflow management](https://airtable.com/), [PivotalTracker](https://www.pivotaltracker.com/) for ticketing, and [GitLab for dev-ops and CI](https://about.gitlab.com/).

- **Other fun technologies** including [HAProxy](http://www.haproxy.org/), [OpenResty](https://openresty.org/en/), [Varnish](https://varnish-cache.org/), and bash. We don't touch them much because they do their job well and rarely break.

# Life at Splitgraph

**We are a young company building the initial team.** As an early contributor, you'll have a chance to shape our initial mission, growth and company values.

**We think that remote work is the future**, and that's why we're building a remote-first organization. We chat on [Mattermost](https://mattermost.com/) and have video calls on Zoom. We brainstorm with [Miro](https://miro.com/) and organize with [Notion](https://www.notion.so).

**We try not to take ourselves too seriously**, but we are goal-oriented with an ambitious mission.

**We believe that as a small company, we can out-compete incumbents** by thinking from first principles about how organizations interact with data. We are very competitive.

# Benefits

- Fully remote
- Flexible working hours
- Generous compensation and equity package
- Opportunity to make high-impact contributions to an agile team

# How to Apply? Questions?

[**Complete the job application**](https://4o99daw6ffu.typeform.com/to/ePkNQiDp)

If you have any questions or concerns, feel free to email us at [[email protected]](mailto:[email protected])

#Location
🌏 Worldwide
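Since the posting leans heavily on the data delivery network being a single SQL endpoint, here is a rough sketch of what querying it from Python might look like with `psycopg2`. The hostname, database name, and credentials below are assumptions for illustration; the real connection details live in Splitgraph's docs:

```python
# Sketch: connect to the Splitgraph DDN over the PostgreSQL wire protocol.
# Host, dbname, and credentials are illustrative assumptions, not verified values.
import psycopg2

conn = psycopg2.connect(
    host="data.splitgraph.com",  # assumed DDN endpoint
    port=5432,
    dbname="ddn",                # assumed database name
    user="<api-key>",            # placeholder credentials
    password="<api-secret>",
)
with conn.cursor() as cur:
    # Datasets are addressed as quoted schema names, e.g.
    # 'SELECT * FROM "namespace/repository:tag".table LIMIT 5'.
    cur.execute("SELECT 1")  # trivial stand-in query
    print(cur.fetchone())
conn.close()
```

Because the endpoint speaks plain Postgres, the same connection works from `psql`, BI tools like Metabase, or anything else with a Postgres driver, which is exactly the "works seamlessly with existing tools in the Postgres ecosystem" claim above.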


See more jobs at Splitgraph

# How do you apply?

This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.