# Remote 🧩 Elasticsearch Jobs


Tucows

verified · 🌏 Worldwide · 💰 $100k - $120k

Tags: linux, python, developer

Tucows is hiring a Remote Systems Engineer

We are looking for a Systems Engineer who has strong experience with Linux and Elasticsearch to join our Elastic team.

Our team provides managed Elasticsearch clusters for use by other groups. In addition, we are responsible for deploying virtual infrastructure to our OpenStack private cloud and for building and maintaining all the systems used for monitoring, logging, and deployment of our systems and services.

You have a passion for automation and will build and maintain a reusable Elasticsearch as a Service solution for other teams in the company.

Your experience with Elasticsearch and related technologies will allow you to act as a subject matter expert for the teams that are using Elasticsearch.

----------

# About the Role

Who you are:
- You have extensive experience working with and troubleshooting Linux-based systems.
- You have extensive experience maintaining infrastructure via configuration management (SaltStack or similar) and infrastructure as code (Terraform or similar).
- You have solid experience working with and supporting Elasticsearch.
- You have a passion and a talent for automation.
- You have experience working with public or private cloud (OpenStack or similar).
- You have experience with container orchestration (Nomad or similar).
- You are comfortable working with Git and GitHub.
- You are a team player who enjoys learning from others, as well as helping others learn.

Who you might be:
- You might have experience writing automation tools in Python.
- You might have experience creating your own Docker images.
- You might have experience creating or maintaining an in-house "as a Service" solution.

What you will do:
- Maintain the Elastic team's infrastructure using Terraform and Salt.
- Provide Elasticsearch clusters "as a Service", via automated deployment pipelines (GitHub Actions).
- Ensure our monitoring system (Prometheus and Grafana) keeps tabs on everything and notifies us when problems arise (or preferably before problems arise!) — a minimal health-check sketch follows this posting.
- Be a subject matter expert for Elasticsearch and related technologies (Logstash, Kibana, Filebeat), and guide the teams who are using these systems.

----------

# About Tucows

We do a lot, but at our core, we're in the business of keeping people connected and keeping the Internet open.

As the second-largest domain registrar in the world by volume (OpenSRS, Enom, EPAG, Ascio and Hover), we help people find their place online.

As Ting Internet, we bring Crazy Fast Fiber InternetⓇ to communities across the U.S., helping them unlock the power of the Internet.

As a Mobile Services Enabler (MSE), we force big networks to compete and innovate.

Join The Herd at https://www.tucows.com/careers/

Investor info (NASDAQ: TCX, TSX: TC): https://www.tucows.com/investors/

----------

We offer a competitive compensation and benefits package with invested growth opportunities. So if you are ready to be part of a fast-growing technology company where you determine your future, we want to hear from you.

We believe diversity drives innovation. We are committed to inclusion across race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status or disability status. We celebrate multiple approaches and diverse points of view.

We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.

#Salary and compensation
$100,000 — $120,000/year

#Location
🌏 Worldwide
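The "Elasticsearch as a Service" work described above revolves around automated deployment and monitoring. As a rough illustration (not Tucows' actual tooling), here is a minimal Python sketch that polls Elasticsearch's standard `_cluster/health` API; the cluster URL and the alerting hook are assumptions.

```python
"""Minimal sketch: poll an Elasticsearch cluster's health endpoint.

The _cluster/health endpoint and its "status" field are standard
Elasticsearch APIs, but the URL and alerting hook here are hypothetical.
"""
import requests

ES_URL = "http://localhost:9200"  # hypothetical cluster address

def cluster_status(base_url: str = ES_URL) -> str:
    """Return the cluster status: "green", "yellow", or "red"."""
    resp = requests.get(f"{base_url}/_cluster/health", timeout=10)
    resp.raise_for_status()
    return resp.json()["status"]

if __name__ == "__main__":
    status = cluster_status()
    if status != "green":
        # In a real setup this is where an alert would fire
        # (e.g. via Prometheus Alertmanager, as the posting mentions).
        print(f"WARNING: cluster status is {status}")
    else:
        print("Cluster healthy")
```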


See more jobs at Tucows

# How do you apply?

To apply please click on the link below!
Apply for this job

Previous Remote 🧩 Elasticsearch Jobs

Shopify

This job is getting a pretty high amount of applications right now (12% of viewers clicked Apply)

verified closed · United States, Canada

Tags: dev, cloud infrastructure, ruby, rails
This job post is closed and the position is probably filled. Please do not apply.
**Company Description**

Shopify is the leading omni-channel commerce platform. Merchants use Shopify to design, set up, and manage their stores across multiple sales channels, including mobile, web, social media, marketplaces, brick-and-mortar locations, and pop-up shops. The platform also provides merchants with a powerful back-office and a single view of their business, from payments to shipping. The Shopify platform was engineered for reliability and scale, making enterprise-level technology available to businesses of all sizes. Headquartered in Ottawa, Canada, Shopify currently powers over 1,000,000 businesses in approximately 175 countries and is trusted by brands such as Allbirds, Gymshark, PepsiCo, Staples, and many more.

Are you looking for an opportunity to work on planet-scale infrastructure? Do you want your work to impact thousands of developers and millions of customers? Do you enjoy tackling complex problems, and learning through experimentation? Shopify has all this and more.

The infrastructure teams build and maintain Shopify's critical infrastructure through software and systems engineering. We make sure Shopify—the world's fastest growing commerce platform—stays reliable, performant, and scalable for our 2,000+ member development team to build on, and our 1.7 million merchants to depend on.

**Job Description**

Our team covers the disciplines of site reliability engineering and infrastructure engineering, all to ensure Shopify's infrastructure is able to scale massively while staying resilient.

On our team, you'll get to work autonomously on engaging projects in an area you're passionate about. Not sure what interests you most? Here are some of the things you could work on:

* Build on top of one of the largest Kubernetes deployments in Google Cloud (we are operating a fleet of 50+ clusters)
* Collaborate with other Shopify developers to understand their needs and ensure our team works on the right things
* Maintain Shopify's Heroku-style self-service PaaS for our developers to consolidate over 400 production services
* Help build our own Database as a Service layers, which include features such as transparent load balancing proxies and automatic failovers, using the current best-of-breed technologies in the area
* Help develop our caching infrastructure and advise Shopify developers on effective use of the caching layers
* Build tooling that delights Shopify developers and allows them to make an impact quickly
* Work as part of the engineering team to build and scale distributed, multi-region systems
* Investigate and resolve production issues
* Build languages, frameworks and libraries to support our systems
* Build Shopify's predictable, scalable, and high performing full text search infrastructure
* Build and support infrastructure and tooling to protect our platform from bots and DDoS attacks
* Autoscale compute up and down based on the demands of the platform, and further protect the platform by shedding lower priority requests as the load gets high (see the toy sketch after this posting)
* And plenty more!

**We also understand the importance of sharing our work back to the developer community:**

* Ghostferry: an open source cross cloud, multipurpose database migration tool and library
* Services DB: a platform to manage services across various runtime environments
* Shipit: our open-source deployment tool
* Capturing Every Change From Shopify's Sharded Monolith
* Read consistency with database replicas

**Qualifications**

Some of the technology that the team uses: Ruby, Rails, Go, Kubernetes, MySQL, Redis, Memcached, Docker, CI Pipelines, Kafka, Elasticsearch, Google Cloud.

Is some of this tech new to you? That's OK! We know not everyone will come in fully familiar with this stack, and we provide support to learn on the job.

**Additional information**

Our teams are distributed remotely across North American and European timezones.

We know that applying to a new role takes a lot of work and we truly value your time. We're looking forward to reading your application.

At Shopify, we are committed to building and fostering an environment where our employees feel included, valued, and heard. Our belief is that a strong commitment to diversity and inclusion enables us to truly make commerce better for everyone. We strongly encourage applications from Indigenous peoples, racialized people, people with disabilities, people from gender and sexually diverse communities and/or people with intersectional identities.

#Location
United States, Canada
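The load-shedding bullet above is a general SRE pattern rather than anything Shopify-specific. A toy sketch of the idea in Python, where the thresholds, priority classes, and load signal are all made up for illustration:

```python
"""Toy sketch of priority-based load shedding (illustrative only;
the thresholds, priorities, and load signal are invented).
"""
import random

# Requests carry a priority: lower number = more important.
CHECKOUT, STOREFRONT, CRAWLER = 0, 1, 2

def current_load() -> float:
    """Stand-in for a real utilization signal (CPU, queue depth, ...)."""
    return random.random()

def should_shed(priority: int, load: float) -> bool:
    """Shed progressively lower-priority traffic as load rises."""
    # Below 70% load, serve everything; above that, drop crawlers first,
    # then storefront reads, and keep checkouts as long as possible.
    thresholds = {CRAWLER: 0.7, STOREFRONT: 0.85, CHECKOUT: 0.97}
    return load >= thresholds[priority]

if __name__ == "__main__":
    load = current_load()
    for name, prio in [("checkout", CHECKOUT), ("storefront", STOREFRONT), ("crawler", CRAWLER)]:
        action = "shed" if should_shed(prio, load) else "serve"
        print(f"load={load:.2f} {name}: {action}")
```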


See more jobs at Shopify

# How do you apply?

This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.

Code Runners

This job is getting a pretty high amount of applications right now (16% of viewers clicked Apply)

verified closed · 💰 $30k - $50k

Tags: net core, react, angular
This job post is closed and the position is probably filled. Please do not apply.
### About Code Runners

Code Runners is a software development company that offers software and data services. Our projects are primarily in the marketing, data analytics and consumer health domains. We like to be challenged by complex projects with tight deadlines and work together as a team to deal with them. Would you like to join us in our journey to be our best?

We're currently looking for .NET developers to join our software development team.

### What are the responsibilities of a .NET developer?

* Analyze and review software requirements
* Estimate software development tasks
* Implement components as per described user stories
* Participate in project planning, reviews and retrospectives
* Follow agreed-upon architecture patterns
* Provide expertise on software solutions and architecture
* Develop, run and support unit & integration tests
* Optimize code, fix existing bugs
* Continuously work towards higher code quality and proactively suggest improvements to ensure customer success

### What qualifications are required?

* At least 4 years of professional experience in web software development
* At least 3 years of experience with .NET, at least 1 year of experience with .NET Core
* At least 2 years of experience with JavaScript
* Education or experience in Computer Science
* Fluent English (both written and spoken)
* Interest and ability to learn additional coding languages as needed
* Excellent analytical skills and attention to detail
* Customer-centric, with the ability to put customer requirements in a wider context
* Team player with communication, presentation and organizational skills; proactive and outgoing
* Ability to work accurately and effectively under pressure

### What would be considered an advantage?

* Experience with Azure equivalent to the AZ-204 certification
* At least 2 years of experience with Angular 2+ or React
* At least 1 year of experience with Elasticsearch
* Experience with frameworks for data visualization, e.g. Highcharts, D3, Plotly
* Understanding of infrastructure and scalability
* Experience or interest in the field of DevOps
* Working knowledge of data algorithms

#Salary and compensation
$30,000 — $50,000/year


See more jobs at Code Runners

# How do you apply?

This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.
Clevertech

This job post is closed and the position is probably filled. Please do not apply.

**What we're working on**

Enterprise companies turn to us to help them launch innovative digital products that interact with hundreds of millions of customers, transactions and data points. The problems we solve every day are real and require creativity, grit and determination. We are building a culture that challenges norms while fostering experimentation and personal growth. We are hiring team members who are passionate and energized by the vision of empowering our customers in a complex industry through technology, data and a deep understanding of client concerns. In order to grasp the scale of problems we face, ideally you have some exposure to Logistics, FinTech, Transportation, Insurance, Media or other complex multifactor industries.

**Your role**

As a Ruby developer at Clevertech, you will actively contribute to creating software solutions that will set industry standards. You will work alongside some of the best in a collaborative environment while focusing on your core skills. Be a master of your craft while being 100% remote and never have to worry about filling in timesheets.

**Requirements**

- 7+ years of professional experience (a technical assessment will be required)
- Bachelor's or Master's degree in Computer Science or a similar technical discipline
- Experience leading cross-functional development teams in building and maintaining custom software solutions
- Ability to partner and interact with senior-level management/executives and senior technical teams
- Strong interpersonal and relationship development skills with the ability to balance product requirements, manage client expectations, and drive your team to effective results
- Strong understanding of the agile software development process
- Excited by ambiguity and rapid changes common in early-stage product development

**Working at Clevertech**

People do their best work when they're cared for and in the right environment:

- RemoteNative™: Pioneers in the industry, we are committed to remote work.
- Flexibility: Wherever you are, and wherever you want to go, we embrace the freedom gained through trust and professionalism.
- Team: Be part of an amazing team of senior engineers that you can rely on.
- Growth: Become a master in the art of remote work and effective communication.
- Compensation: Best-in-class compensation for remote workers, plus the swag you want.
- Cutting Edge: Stay sharp in your space, work at the very edge of tech.
- Passion: Annual financial allowance for YOUR development and YOUR passions.

**Getting Hired**

Interested in exploring your future in this role and Clevertech? Set yourself up for success and take a look at our Interview Process before getting started! The best people in tech just happen to be all over the world. Are you one of them? APPLY NOW

#Salary and compensation
$60,000 — $100,000/year

#Location
🌏 Worldwide


See more jobs at Clevertech

# How do you apply?

This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.
IPinfo

This job post is closed and the position is probably filled. Please do not apply.

**About IPinfo**

IPinfo is a leading provider of IP address data. Our API handles over 40 billion requests a month, and we also license our data for use in many products and services you might have used. We started as a side project back in 2013, offering a free geolocation API, and we've since bootstrapped ourselves to a profitable business with a global team of 14, and grown our data offerings to include geolocation, IP to company, carrier detection, and VPN detection. Our customers include T-Mobile, Nike, DataDog, DemandBase, Clearbit, and many more.

**How We Work**

We have a small and ambitious team, spread all over the globe. We sync up on a monthly all-hands Zoom call, and most teams do a call together every 2 weeks. Everything else happens asynchronously, via Slack, GitHub, Linear, and Notion. That means you can pick the hours that work best for you, to allow you to be at your most productive.

To thrive in this environment you'll need to have high levels of autonomy and ownership. You have to be resourceful and able to work effectively in a remote setup.

**The Role**

We're looking to add an experienced engineer to our 4-person data team. You'll work on improving our data, maintaining our data pipelines, defining and creating new data sets, and helping us cement our position as an industry leader. Some things we've recently been working on in the data team:

* Building out our global network of probe servers, and doing internet-wide data collection (ping, traceroute, etc.).
* Finding, analyzing, and incorporating existing data sets into our pipeline to improve our quality and accuracy.
* Building ML models to classify IP address usage as consumer ISP, hosting provider, or business.
* Inventing and implementing scalable algorithms for IP geolocation and other big data processing.

Here are some of the tools we use. Great if you have experience with these, but if not we'd expect you to ramp up quickly without any problems:

* BigQuery
* Google Composer / Apache Airflow
* Python / Bash / SQL
* Elasticsearch

Any IP address domain knowledge would be useful too, but we can help get you up to speed here (see the sketch after this posting for a taste of this kind of work):

* ASN / BGP / CIDR / Ping / Traceroute / Whois, etc.

**What We Offer**

* 100% remote team and work environment
* Flexible working hours
* Minimal meetings
* Competitive salary
* Flexible vacation policy
* Interesting and challenging work

#Salary and compensation
$90,000 — $140,000/year

#Location
🌏 Worldwide
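To give a flavor of the CIDR-level domain knowledge the posting mentions, here is a minimal Python sketch using only the standard library; the prefix table and usage labels are invented for illustration and are not IPinfo data.

```python
"""Minimal sketch: map an IP address to a known CIDR prefix.

Uses only the standard library; the prefix-to-label table below is
invented for illustration (the sample networks are RFC 5737 test ranges).
"""
import ipaddress

# Hypothetical prefix table: network -> usage label
PREFIXES = {
    ipaddress.ip_network("203.0.113.0/24"): "hosting provider",
    ipaddress.ip_network("198.51.100.0/24"): "consumer ISP",
}

def classify(ip: str) -> str:
    """Return the label of the most specific matching prefix, if any."""
    addr = ipaddress.ip_address(ip)
    matches = [net for net in PREFIXES if addr in net]
    if not matches:
        return "unknown"
    # Longest prefix (largest prefixlen) wins, as in routing tables.
    best = max(matches, key=lambda net: net.prefixlen)
    return PREFIXES[best]

if __name__ == "__main__":
    print(classify("203.0.113.42"))   # hosting provider
    print(classify("192.0.2.1"))      # unknown
```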


See more jobs at IPinfo

# How do you apply?

This job post has been closed by the poster, which means they probably have enough applicants now. Please do not apply.
Jungle Scout

This job post is closed and the position is probably filled. Please do not apply.

At Jungle Scout, we are on a mission to empower entrepreneurs and brands to grow successful e-commerce businesses, and we provide the industry-leading data, powerful tools, and resources they need.

The role:

* Do you get excited working with talented engineers, leading them to ship product features and enhancements, and helping them thrive in their career?
* Do you define a great day as getting sh*t done and having fun working with your team?
* Do you have a thirst for breaking down complex initiatives into achievable project plans?
* Do you thrive when you're contributing to a high-performing, humble team?

Amazing, then you're the type of person we're looking for!

We're growing and we are looking to add a Senior Backend Engineer to the Engineering team focused on building Jungle Scout's enterprise SaaS product.

Where would this person be located? Great question! We're a remote-first company and hope to hire this Senior Software Engineer **anywhere between the EST and PST timezones**.

Interested in learning more? Let's get into the details:

**What you will be doing:**

Architect and build:
* Highly scalable, fault-tolerant, elastic, and secure services
* Large-scale distributed systems
* Applications that are a composition of independent services (microservices)

Make recommendations around:
* Technologies to be used for new services and applications
* Improvements on existing services and applications

Scale, maintain and improve:
* Existing codebases and system infrastructures
* Independent services using CI/CD and multiple environment stages (e.g., staging vs. production) to ensure rapid delivery while maintaining high quality and availability

Participate and contribute in:
* Leading the technical architecture and delivery of complex projects, interfacing with product, design, and multiple engineering teams
* Helping product managers with project planning and facilitating the Scrum process
* Ongoing improvement of engineering practices and procedures

**Who you are:**

* You've done this before. You're an expert with one or more modern programming languages (Ruby, JavaScript, Python, Java), technologies, coding, testing, debugging, and automation techniques, and you have built enterprise-level services with popular backend frameworks (e.g., Ruby on Rails, NodeJS, Spring, Django, Flask, etc.)
* You have experience building data-driven systems that have high availability, optimize for performance, and are highly scalable
* You're experienced with modern SQL and NoSQL databases, know when to use each, and can build performant systems on top of each
* You're an AWS cloud wizard: you have experience building cloud-native services at scale, working with core AWS services like EC2, RDS, DynamoDB, Elasticsearch, Elastic Beanstalk, Lambda, CloudWatch, SQS, Kinesis and SNS
* You're a master communicator & passionate mentor: fluent in both written & verbal English to easily chat with our North American teams; able to communicate effectively, clearly, and concisely on both technical and non-technical subjects; you take any chance you can get to share knowledge, contribute to the team's documentation, and mentor teammates in an open, respectful, flexible, empathetic manner; you do not shy away from taking and giving feedback
* You're autonomous: you successfully execute large multi-person projects and well-defined initiatives from definition through to the end

**Working at Jungle Scout**

[Check us out!](https://www.junglescout.com/jobs/)
* The BEST team.
* Remote-first culture.
* International retreats.
* Access to Jungle Scout tools & experts.
* Performance bonus.
* Flexible vacation.
* Comprehensive health benefits & retirement program.

**We prioritize Diversity, Equity, and Inclusion**

At Jungle Scout, we hire great people from a wide variety of backgrounds, not just because it's the right thing to do, but because it makes our company stronger.

Jungle Scout is committed to creating a diverse environment and is proud to be an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age, or veteran status.

*All offers of employment at Jungle Scout are contingent upon clear results of a comprehensive background check. Background checks will be conducted on all final candidates prior to start date.*

#Location
North America and South America


See more jobs at Jungle Scout

# How do you apply?

This job post has been closed by the poster, which means they probably have enough applicants now. Please do not apply.
Codelitt

This job post is closed and the position is probably filled. Please do not apply.

Codelitt is looking for a Senior Full Stack Engineer with experience building highly complex applications, using C#, .NET Core, React and GraphQL, to join our team.

You will work with our client, a commercial real estate firm, to build and implement a solution that helps real estate owners and occupiers see options and context in the market.

#Salary and compensation
$60,000 — $90,000/year

#Location
Americas


See more jobs at Codelitt

# How do you apply?

This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.
Splitgraph

This job post is closed and the position is probably filled. Please do not apply.

# We're building the Data Platform of the Future

Join us if you want to rethink the way organizations interact with data. We are a **developer-first company**, committed to building around open protocols and delivering the best experience possible for data consumers and publishers.

Splitgraph is a **seed-stage, venture-funded startup hiring its initial team**. The two co-founders are looking to grow the team to five or six people. This is an opportunity to make a big impact on an agile team while working closely with the founders.

Splitgraph is a **remote-first organization**. The founders are based in the UK, and the company is incorporated in both the USA and the UK. Candidates are welcome to apply from any geography. We want to work with the most talented, thoughtful and productive engineers in the world.

# Open Positions

**Data Engineers welcome!** The job titles have "Software Engineer" in them, but at Splitgraph there's a lot of overlap between data and software engineering. We welcome candidates from all engineering backgrounds.

[Senior Software Engineer - Backend (mainly Python)](https://www.notion.so/splitgraph/Senior-Software-Engineer-Backend-2a2f9e278ba347069bf2566950857250)

[Senior Software Engineer - Frontend (mainly TypeScript)](https://www.notion.so/splitgraph/Senior-Software-Engineer-Frontend-6342cd76b0df483a9fd2ab6818070456)

→ [**Apply to Job**](https://4o99daw6ffu.typeform.com/to/ePkNQiDp) ← (same form for both positions)

# What is Splitgraph?

## Open Source Toolkit

[Our open-source product, sgr,](https://www.github.com/splitgraph/splitgraph) is a tool for building, versioning and querying reproducible datasets. It's inspired by Docker and Git, so it feels familiar. And it's powered by PostgreSQL, so it works seamlessly with existing tools in the Postgres ecosystem. Use Splitgraph to package your data into self-contained data images that you can share with other Splitgraph instances.

## Splitgraph Cloud

Splitgraph Cloud is a platform for data cataloging, integration and governance. The user can upload data, connect live databases, or "push" versioned snapshots to it. We give them a unified SQL interface to query that data, a catalog to discover and share it, and tools to build/push/pull it.

# Learn More About Us

- Listen to our interview on the [Software Engineering Daily podcast](https://softwareengineeringdaily.com/2020/11/06/splitgraph-data-catalog-and-proxy-with-miles-richardson/)
- Watch our co-founder Artjoms present [Splitgraph at the Bay Area ClickHouse meetup](https://www.youtube.com/watch?v=44CDs7hJTho)
- Read our HN/Reddit posts ([one](https://news.ycombinator.com/item?id=24233948) [two](https://news.ycombinator.com/item?id=23769420) [three](https://news.ycombinator.com/item?id=23627066) [four](https://old.reddit.com/r/datasets/comments/icty0r/we_made_40k_open_government_datasets_queryable/))
- [Read our blog](https://www.splitgraph.com/blog)
- Read the slides from our early (2018) presentations: ["Docker for Data"](https://www.slideshare.net/splitgraph/splitgraph-docker-for-data-119112722), [AHL Meetup](https://www.slideshare.net/splitgraph/splitgraph-ahl-talk)
- [Follow us on Twitter](https://www.twitter.com/splitgraph)
- [Find us on GitHub](https://www.github.com/splitgraph)
- [Chat with us in our community Discord](https://discord.gg/eFEFRKm)
- Explore the [public data catalog](https://www.splitgraph.com/explore) where we index 40k+ datasets

# How We Work: What's our stack look like?

We prioritize developer experience and productivity. We resent repetition and inefficiency, and we never hesitate to automate the things that cause us friction. Here's a sampling of the languages and tools we work with:

- **[Python](https://www.python.org/) for the backend.** Our [core open source](https://www.github.com/splitgraph/splitgraph) tech is written in Python (with [a bit of C](https://github.com/splitgraph/Multicorn) to make it more interesting), as well as most of our backend code. The Python code powers everything from authentication routines to database migrations. We use the latest version and tools like [pytest](https://docs.pytest.org/en/stable/), [mypy](https://github.com/python/mypy) and [Poetry](https://python-poetry.org/) to help us write quality software.

- **[TypeScript](https://www.typescriptlang.org/) for the web stack.** We use TypeScript throughout our web stack. On the frontend we use [React](https://reactjs.org/) with [next.js](https://nextjs.org/). For data fetching we use [apollo-client](https://www.apollographql.com/docs/react/) with fully-typed GraphQL queries auto-generated by [graphql-codegen](https://graphql-code-generator.com/) based on the schema that [Postgraphile](https://www.graphile.org/postgraphile) creates by introspecting the database.

- **[PostgreSQL](https://www.postgresql.org/) for the database, because of course.** Splitgraph is a company built around Postgres, so of course we are going to use it for our own database. In fact, we actually have three databases. We have `auth-db` for storing sensitive data, `registry-db` which acts as a [Splitgraph peer](https://www.splitgraph.com/docs/publishing-data/push-data) so users can push Splitgraph images to it using [sgr](https://www.github.com/splitgraph/splitgraph), and `cloud-db` where we store the schemata that Postgraphile uses to autogenerate the GraphQL server.

- **[PL/pgSQL](https://www.postgresql.org/docs/current/plpgsql.html) and [PL/Python](https://www.postgresql.org/docs/current/plpython.html) for stored procedures.** We define a lot of core business logic directly in the database as stored procedures, which are ultimately [exposed by Postgraphile as GraphQL endpoints](https://www.graphile.org/postgraphile/functions/). We find this to be a surprisingly productive way of developing, as it eliminates the need for manually maintaining an API layer between data and code. It presents challenges for testing and maintainability, but we've built tools to help with database migrations and rollbacks, and an end-to-end testing framework that exercises the database routines.

- **[PostgREST](https://postgrest.org/en/v7.0.0/) for auto-generating a REST API for every repository.** We use this excellent library (written in [Haskell](https://www.haskell.org/)) to expose an [OpenAPI](https://github.com/OAI/OpenAPI-Specification)-compatible REST API for every repository on Splitgraph ([example](http://splitgraph.com/mildbyte/complex_dataset/latest/-/api-schema)).

- **Lua ([luajit](https://luajit.org/luajit.html) 5.x), C, and [embedded Python](https://docs.python.org/3/extending/embedding.html) for scripting [PgBouncer](https://www.pgbouncer.org/).** Our main product, the "data delivery network", is a single SQL endpoint where users can query any data on Splitgraph. Really it's a layer of PgBouncer instances orchestrating temporary Postgres databases and proxying queries to them, where we load and cache the data necessary to respond to a query. We've added scripting capabilities to enable things like query rewriting, column masking, authentication, ACL, orchestration, firewalling, etc. (A minimal client-side sketch of querying such an endpoint follows this posting.)

- **[Docker](https://www.docker.com/) for packaging services.** Our CI pipeline builds every commit into about a dozen different Docker images, one for each of our services. A production instance of Splitgraph can be running over 60 different containers (including replicas).

- **[Makefile](https://www.gnu.org/software/make/manual/make.html) and [docker-compose](https://docs.docker.com/compose/) for development.** We use [a highly optimized Makefile](https://www.splitgraph.com/blog/makefile) and `docker-compose` so that developers can easily spin up a stack that mimics production in every way, while keeping it easy to hot reload, run tests, or add new services or configuration.

- **[Nomad](https://www.nomadproject.io/) for deployment and [Terraform](https://www.terraform.io/) for provisioning.** We use Nomad to manage deployments and background tasks. Along with Terraform, we're able to spin up a Splitgraph cluster on AWS, GCP, Scaleway or Azure in just a few minutes.

- **[Airflow](https://airflow.apache.org/) for job orchestration.** We use it to run and monitor jobs that maintain our catalog of [40,000 public datasets](https://www.splitgraph.com/blog/40k-sql-datasets), or ingest other public data into Splitgraph.

- **[Grafana](https://grafana.com/), [Prometheus](https://prometheus.io/), [ElasticSearch](https://www.elastic.co/), and [Kibana](https://www.elastic.co/kibana) for monitoring and metrics.** We believe it's important to self-host fundamental infrastructure like our monitoring stack. We use this to keep tabs on important metrics and the health of all Splitgraph deployments.

- **[Mattermost](https://mattermost.com/) for company chat.** We think it's absolutely bonkers to pay a company like Slack to hold your company communication hostage. That's why we self-host an instance of Mattermost for our internal chat. And of course, we can deploy it and update it with Terraform.

- **[Matomo](https://matomo.org/) for web analytics.** We take privacy seriously, and we try to avoid including any third party scripts on our web pages (currently we include zero). We self-host our analytics because we don't want to share our user data with third parties.

- **[Metabase](https://www.metabase.com/) and [Splitgraph](https://www.splitgraph.com) for BI and [dogfooding](https://en.wikipedia.org/wiki/Eating_your_own_dog_food).** We use Metabase as a frontend to a Splitgraph instance that connects to Postgres (our internal databases), MySQL (Matomo's database), and ElasticSearch (where we store logs and DDN analytics). We use this as a chance to dogfood our software and produce fancy charts.

- **The occasional best-of-breed SaaS services for organization.** As a privacy-conscious, independent-minded company, we try to avoid SaaS services as much as we can. But we still find ourselves unable to resist some of the better products out there. For organization we use tools like [Zoom](https://www.zoom.us) for video calls, [Miro](https://miro.com/) for brainstorming, [Notion](https://www.notion.so) for documentation (you're on it!), [Airtable for workflow management](https://airtable.com/), [PivotalTracker](https://www.pivotaltracker.com/) for ticketing, and [GitLab for dev-ops and CI](https://about.gitlab.com/).

- **Other fun technologies** including [HAProxy](http://www.haproxy.org/), [OpenResty](https://openresty.org/en/), [Varnish](https://varnish-cache.org/), and bash. We don't touch them much because they do their job well and rarely break.

# Life at Splitgraph

**We are a young company building the initial team.** As an early contributor, you'll have a chance to shape our initial mission, growth and company values.

**We think that remote work is the future**, and that's why we're building a remote-first organization. We chat on [Mattermost](https://mattermost.com/) and have video calls on Zoom. We brainstorm with [Miro](https://miro.com/) and organize with [Notion](https://www.notion.so).

**We try not to take ourselves too seriously**, but we are goal-oriented with an ambitious mission.

**We believe that as a small company, we can out-compete incumbents** by thinking from first principles about how organizations interact with data. We are very competitive.

# Benefits

- Fully remote
- Flexible working hours
- Generous compensation and equity package
- Opportunity to make high-impact contributions to an agile team

# How to Apply? Questions?

[**Complete the job application**](https://4o99daw6ffu.typeform.com/to/ePkNQiDp)

If you have any questions or concerns, feel free to email us at [[email protected]](mailto:[email protected])

#Location
🌏 Worldwide
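Since the posting describes the DDN as "a single SQL endpoint", here is a minimal sketch of what querying such a Postgres-compatible endpoint looks like from Python with psycopg2; the host, credentials, and table name are placeholders, not Splitgraph's actual connection details.

```python
"""Minimal sketch: querying a Postgres-compatible SQL endpoint
(like the DDN described above) with psycopg2.

Host, credentials, and the queried table are placeholders.
"""
import psycopg2

conn = psycopg2.connect(
    host="ddn.example.com",   # placeholder endpoint
    port=5432,
    user="api_key",           # placeholder credentials
    password="api_secret",
    dbname="ddn",
)

with conn, conn.cursor() as cur:
    # Any SQL works; per the posting, PgBouncer proxies the query to a
    # temporary Postgres instance where the data is loaded and cached.
    cur.execute("SELECT domain, count(*) FROM example_table GROUP BY domain LIMIT 5")
    for row in cur.fetchall():
        print(row)
conn.close()
```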


See more jobs at Splitgraph

# How do you apply?

This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.
Cogsy

This job post is closed and the position is probably filled. Please do not apply.

We're looking for a talented Full Stack Engineer who leads by example and gets stuck into everything that touches our product. Come help us shape a product of lasting value for our first and future customers.

### [Cogsy](http://cogsy.com/about/)

We're building products that will help fast-growing ecommerce brands optimize their purchase decisions and grow even better. We believe in the idea of economies of better (not just economies of scale) and have strong values.

We're building our initial team and, beyond our values, we want diverse, unique individuals to show up as their magical, best selves in the work they do within our team.

### The role

You will be responsible for creating the early versions of Cogsy's application. This includes but is not limited to:

- Product development
- API design & development
- Database and systems administration
- Metrics / Growth / A/B testing
- 3rd-party integrations

We expect you to be a generalist with the ability and confidence to work on any part of our stack. These are (some of) the tools that we work with every day:

- NodeJS
- React
- Database (MongoDB / PostgreSQL)
- Aggregation engine (Elasticsearch)
- Caching (Redis)
- Async messaging (RabbitMQ)
- Bonus: Python experience

This is a remote position and you can work from wherever. It is however important that we maintain connectedness as a team and have sufficient time for synchronous work too. We'd prefer team members that are on CET or EST (or +-1 hour difference) or work on those schedules, as that means that there is 3-4 hours of overlap for the whole team every day.

### Requirements

If you were to join Cogsy today, you'd be one of the first team members and can have great influence on the next steps we take.

You're likely a good fit for this position if you:

- **[Read these values](https://cogsy.com/about/#headline-428-14)** and they resonate with you.
- Are a true product builder and can make progress both independently and within a team.
- Can put an infrastructure in place that handles / parses a lot of data.
- Can move fast and help us ship a first version (that is revenue-ready) in a cost- and time-efficient manner.
- Have always wanted to build your own team.
- Take action and pay attention to detail.
- Have superior communication skills.
- Have professional experience building B2B web applications.

### Salary

**$70,000 - $120,000 USD**, depending on level of seniority / experience. We'll assess seniority during the interview and a brief test project.

### Benefits

- True **flexible** work: work wherever and however you need to work to be at your best, **and** ensure you stay connected to the team.
- Once global travel is open again, we'll do **week-long team retreats** in fun locations. All expenses paid, of course.
- A **minimum holiday policy**, which basically means you take time off whenever you need it to recharge or attend to other matters. And the team will hold you accountable to taking a minimum amount of time off in any rolling 12-month window.
- Parental leave for those individuals that plan to discover the joys of having (more) kids.
- **Health insurance** (powered by [Safety Wing](https://safetywing.com/remote-health)) tailored for remote team members, whether you're at home, traveling or being a nomad.
- Monthly **learning** and **wellness allowance**. Buy books, pay for your yoga class or get a Calm subscription for greater mindfulness. Whatever helps you develop as an individual and become *the best you* is what we'll pay for.
- We are a **life- and family-first** company that seeks meaningful experiences outside of work and we endeavor to help our customers do the same.

#Salary and compensation
$70,000 — $120,000/year

#Location
🌏 Worldwide


See more jobs at Cogsy

# How do you apply?

This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.

Prominent Edge


verified closed · 🇺🇸 US-only

Tags: devops, aws, gcp, azure
This job post is closed and the position is probably filled. Please do not apply.
We are looking for a Lead DevOps engineer to join our team at Prominent Edge. We are a small, stable, growing company that believes in doing things right. Our projects and the needs of our customers vary greatly; therefore, we always choose the technology stack and approach that best suits the particular problem and the goals of our customers. As a result, we want engineers who do high-quality work, stay current, and are up for learning and applying new technologies when appropriate. We want engineers who have an in-depth knowledge of Amazon Web Services and are up for using other infrastructures when needed. We understand that for our team to perform at its best, everyone needs to work on tasks that they enjoy. Many of our projects are web applications which often have a geospatial aspect to them. We also really take care of our employees, as demonstrated in our exceptional benefits package. Check out our website at https://prominentedge.com/ for more information and apply through https://prominentedge.com/careers.

Required skills:
* Experience as a Lead Engineer.
* Minimum of 8 years of total experience, including a minimum of 1 year of web or software development experience.
* Experience automating the provisioning of environments by designing, implementing, and managing configuration and deployment infrastructure-as-code solutions.
* Experience delivering scalable solutions utilizing Amazon Web Services: EC2, S3, RDS, Lambda, API Gateway, message queues, and CloudFormation templates.
* Experience with deploying and administering Kubernetes on AWS, GCP or Azure.
* Capable of designing secure and scalable solutions.
* Strong *nix administration skills.
* Development in a Linux environment using Bash, PowerShell, Python, JS, Go, or Groovy.
* Experience automating and streamlining build, test, and deployment phases for continuous integration.
* Experience with automated deployment technologies such as Ansible, Puppet, or Chef.
* Experience administering automated build environments such as Jenkins and Hudson.
* Experience configuring and deploying logging and monitoring services - fluentd, logstash, GeoHashes, etc.
* Experience with Git/GitHub/GitLab.
* Experience with DockerHub or a container registry.
* Experience with building and deploying containers to a production environment.
* Strong knowledge of security and recovery from a DevOps perspective.

Bonus skills:
* Experience with RabbitMQ and its administration.
* Experience with kops.
* Experience with HashiCorp Vault administration, and Goldfish (a frontend Vault UI).
* Experience with helm for deployment to Kubernetes.
* Experience with CloudWatch.
* Experience with Ansible and/or a configuration management language.
* Experience with Ansible Tower (not necessary).
* Experience with VPNs (OpenVPN preferable).
* Experience with network administration and understanding network topology and architecture.
* Experience with AWS spot instances or Google preemptible instances.
* Experience with Grafana administration, SSO (Okta or JumpCloud preferable), LDAP / Active Directory administration, CloudHealth or cloud cost optimization.
* Experience with Kubernetes-based software - for example, heptio/ark, ingress-nginx, anchore engine.
* Familiarity with the ELK stack.
* Familiarity with basic administrative tasks and building artifacts on Windows.
* Familiarity with other cloud infrastructures such as Cloud Foundry.
* Strong web or software engineering experience.
* Familiarity with security clearances in case you contribute to our non-commercial projects.

W2 Benefits:
* Not only do you get to join our team of awesome playful ninjas, we also have great benefits:
* Six weeks paid time off per year (PTO + holidays).
* Six percent 401k matching, vested immediately.
* Free PPO/POS healthcare for the entire family.
* We pay you for every hour you work. Need something extra? Give yourself a raise by doing more hours when you can.
* Want to take time off without using vacation time? Shuffle your hours around in any pay period.
* Want a new MacBook Pro laptop? We'll get you one. If you like your MacBook Pro, we'll buy you the new version whenever you want.
* Want some training or to travel to a conference that is relevant to your job? We offer that too!
* This organization participates in E-Verify.

About You:
* You believe in and practice Agile/DevOps.
* You are organized and eager to accept responsibility.
* You want a seat at the table at the inception of new efforts; you do not want things "thrown over the wall" to you.
* You are an active listener, empathetic and willing to understand and internalize the unique needs and concerns of each individual client.
* You adjust your speaking style for your audience and can interact successfully with both technical and non-technical clients.
* You are detail-oriented but never lose sight of the Big Picture.
* You can work equally well individually or as part of a team.
* U.S. citizenship required.

#Location
🇺🇸 US-only


See more jobs at Prominent Edge

# How do you apply?

This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.

InReach Ventures

This job is getting a pretty high amount of applications right now (11% of viewers clicked Apply)

verified closed · UK or Italy · 💰 $55k - $70k

Tags: java, python, aws, docker
This job post is closed and the position is probably filled. Please do not apply.
InReach is changing how VC in Europe works, for good. Through data, software and Machine Learning, we are building an in-house platform to help us find, reach out to and invest in early-stage European startups, regardless of the city or country they're based in.

We are looking for a back-end developer to continue the development of InReach's data services. This involves:
* Cleaning / wrangling / merging / processing the data on companies and founders from across Europe (see the sketch after this posting for a flavor of this work)
* Building data pipelines with the Machine Learning engineers
* Building APIs to support the front-end investment product used by the Investment team (named DIG)

This role will involve working across the stack. From DevOps (Terraform) to web scraping and Machine Learning (Python) all the way to data pipelines and web services (Java) and getting stuck into the front-end (Javascript). It's a great opportunity to hone your skills and master some new ones.

It is important to us that candidates be passionate about helping entrepreneurs and startups. This is our bread-and-butter and we want you to be involved.

InReach is a remote-first employer and we are looking to this hire to help us become an exceptional place to work for remote employees. Whether you are in the office or remote, we are looking for people with excellent written and verbal communication skills.

### Background Reading:
* [InReach Ventures, the 'AI-powered' European VC, closes new €53M fund](https://techcrunch.com/2019/02/11/inreach-ventures-the-ai-powered-european-vc-closes-new-e53m-fund/?guccounter=1)
* [The Full-Stack Venture Capital](https://medium.com/entrepreneurship-at-work/the-full-stack-venture-capital-8a5cffe4d71)
* [Roberto Bonanzinga starts InReach Ventures with DIG platform](https://www.businessinsider.com/roberto-bonanzinga-starts-inreach-ventures-with-dig-platform-2015-11?r=US&IR=T)
* [Exceptional Communication Guidelines](https://www.craft.do/s/Isrjt4KaHMPQ)

## Responsibilities

* Creatively and quickly coming up with effective solutions to undefined problems
* Choosing technology that is modern but not hype-driven
* Developing features and tests quickly with good, clean code
* Being part of the wider development team, reviewing code and participating in architecture from across the stack
* Communicating exceptionally, both asynchronously (written) and synchronously (spoken)
* Helping to shape InReach as a remote-first organization

## Technologies

Given that this position touches so much of the stack, it will be difficult for a candidate who only has experience in Python or only in Java to be effective quickly. While we expect the candidate to be stronger in one or the other, some professional exposure to both is required.

In addition to the programming skills and the ability to write well-designed and tested code, experience with infrastructure on modern cloud platforms and sound architectural reasoning are expected.

None of these are a prerequisite, but they help:
* Functional Programming
* Reactive Streams (RxJava2)
* Terraform
* Postgres
* ElasticSearch
* SQS
* DynamoDB
* AWS Lambda
* Docker
* Dropwizard
* Maven
* Pipenv
* Javascript
* React
* NodeJS

## Interview Process
* 15m video chat with Ben, CTO, to find out more about InReach and the role
* 2h data pipeline technical test (Python)
* 2h web service technical test (Java)
* 30m architectural discussion with Ben, talking through the work you did
* 2h interview with the different team members from across InReach. We're a small company so it's important we see how we'll all work together - not just the tech team!

#Salary and compensation
$55,000 — $70,000/year

#Location
UK or Italy
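The data-wrangling bullet above is the classic entity-resolution chore. Here is a minimal pandas sketch of merging two hypothetical company lists; the records, column names, and the crude normalization rule are invented for illustration.

```python
"""Minimal sketch: merging two company datasets on a normalized name key.

The input records and the normalization rule are invented for
illustration; real entity resolution is far fuzzier than this.
"""
import pandas as pd

sources_a = pd.DataFrame({"name": ["Acme Ltd.", "Widget GmbH"], "city": ["London", "Berlin"]})
sources_b = pd.DataFrame({"name": ["ACME Ltd", "Widget GmbH"], "founded": [2019, 2020]})

def normalize(name: str) -> str:
    """Crude key: lowercase and strip punctuation so near-duplicates match."""
    return "".join(ch for ch in name.lower() if ch.isalnum())

for df in (sources_a, sources_b):
    df["key"] = df["name"].map(normalize)

# Outer-merge so companies present in only one source are kept too.
merged = sources_a.merge(sources_b.drop(columns=["name"]), on="key", how="outer")
print(merged[["name", "city", "founded"]])
```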


See more jobs at InReach Ventures

# How do you apply?

This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.

Muck Rack

This job is getting a pretty high amount of applications right now (15% of viewers clicked Apply)

closed · United States, Canada, Poland, Bulgaria

Tags: python, django, sql
This job post is closed and the position is probably filled. Please do not apply.
Muck Rack's engineering team powers a platform that is meaningfully changing how journalists, PR pros, and marketers around the world work. Self-funded, globally distributed, and remote-first since our founding, Muck Rack was named one of the "Best Places to Work in NYC" by Crain's in 2019. We're looking for a collaborative and self-motivated Senior Software Engineer to join our small but quickly growing team, and make a big impact.

As a Senior Software Engineer you'll work alongside the CTO, fellow software engineers, product managers, and designers, to execute major technical projects on Muck Rack, lead the building of new features, and help shape our engineering culture and processes. Our engineers are not siloed to any particular part of the application - everyone contributes everywhere. You should be excited about working with large amounts of data. Our tech stack includes Python, Django, Celery, MySQL, Elasticsearch, Vue, and Webpack (see the search sketch after this posting). Our technology team is focused on scale, quality, delivery, and thoughtful customer experience. We ship frequently without sacrificing work/life balance.

**If the details below describe you, you could be a great fit for this role:**

* 3+ years professional experience as a software engineer
* Excellent communication skills, with an ability to explain your ideas clearly, give and receive feedback, and work well with team members
* Worked on a complex, high-traffic site at a startup or software-as-a-service company, ideally with large amounts of data
* Solid experience with Django, Python, MySQL (or Postgres) and other software in our tech stack, and a willingness to learn in areas where you have less experience
* Familiarity with modern frontend frameworks (like Vue or React) and development patterns
* Any experience running Elasticsearch at large scale would be a bonus
* Any combination of the following would also be a bonus: experience with Celery, Luigi or Airflow, Kafka, AWS, NLP, data model performance tuning, content extraction, application performance tuning
* Take pride in the quality of your code. Your code is readable, testable, and understandable years later. You adhere to the Zen of Python
* Work well in a fast-paced development environment with testing, continuous integration and multiple daily deploys
* Ability to manage complexity in a large project, and incur technical debt only after considering the tradeoffs
* Have a logical approach to problem solving that combines analytical thinking and intuition
* Interest in journalism, news, media or social media

We encourage candidates of all backgrounds and experiences to apply. We understand job requirements often don't allow your particular work history to shine, and we invite you to show us what you know, and how it relates to our technology. We are an equal opportunity employer.

#Location
United States, Canada, Poland, Bulgaria
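Since the stack pairs Python with Elasticsearch, here is a minimal, self-contained sketch of indexing and searching a document with the official `elasticsearch` Python client (8.x style); the index name, document fields, and local URL are placeholders, not Muck Rack's actual schema.

```python
"""Minimal sketch: index and search articles with elasticsearch-py (8.x).

Index name, document fields, and the local URL are placeholders.
"""
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # placeholder cluster

# Index a document (creates the index on first write with dynamic mapping).
es.index(index="articles", id="1", document={
    "headline": "Journalists adopt new tools",
    "outlet": "Example Daily",
})
es.indices.refresh(index="articles")  # make the doc visible to search

# Full-text query against the headline field.
resp = es.search(index="articles", query={"match": {"headline": "tools"}})
for hit in resp["hits"]["hits"]:
    print(hit["_score"], hit["_source"]["headline"])
```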


See more jobs at Muck Rack

# How do you apply?

This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.

anwalt.de services AG


closed · Europe (MESZ preferred)

Tags: zend framework, mysql, aws

This job post is closed and the position is probably filled. Please do not apply.
**YOUR TASKS**

* Work in Scrum-like teams with the goal of making access to legal consultation a step easier every day
* Work on the backend of our core platform www.anwalt.de (PHP 7) - both on our internal area as well as our public website
* Actively help shape our platform in the right direction and take part in product discussions
* (Plus) You are not afraid of frontend work and are willing to jump in to do minor adjustments

# Requirements

**YOU**

* have at least three years of experience in PHP environments, ideally with Zend Framework
* have already worked on bigger code bases and in complex environments
* are familiar with design patterns and software engineering best practices
* have at least basic knowledge of automated software testing with unit, functional or acceptance tests

#Location
Europe (MESZ preferred)


See more jobs at anwalt.de services AG

# How do you apply?

This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.