# Remote Azure + Python Jobs


Shopify

Senior Data Scientist

United States, Canada

This position is a Remote OK original posting (verified) and is now closed. Originally posted on Remote OK.

Tags: data science, senior, data scientist
This job post is closed and the position is probably filled. Please do not apply.
**Company Description**

Shopify is now permanently remote and working towards a future that is digital by default. Learn more about what this can mean for you.

At Shopify, we build products that help entrepreneurs around the world start and grow their business. We're the world's fastest growing commerce platform with over 1 million merchants in more than 175 different countries, with solutions from point-of-sale and online commerce to financial, shipping logistics and marketing.

**Job Description**

Data is a crucial part of Shopify's mission to make commerce better for everyone. We organize and interpret petabytes of data to provide solutions for our merchants and stakeholders across the organization. From pipelines and schema design to machine learning products and decision support, data science at Shopify is a diverse role with many opportunities to positively impact our success.

Our Data Scientists focus on pushing products and the business forward, with a focus on solving important problems rather than specific tools. We are looking for talented data scientists to help us better understand our merchants and buyers so we can help them on their journey.

**Responsibilities:**

* Proactively identify and champion projects that solve complex problems across multiple domains
* Partner closely with product, engineering and other business leaders to influence product and program decisions with data
* Apply specialized skills and fundamental data science methods (e.g. regression, survival analysis, segmentation, experimentation, and machine learning when needed) to inform improvements to our business
* Design and implement end-to-end data pipelines: work closely with stakeholders to build instrumentation and define dimensional models, tables or schemas that support business processes
* Build actionable KPIs, production-quality dashboards, informative deep dives, and scalable data products
* Influence leadership to drive more data-informed decisions
* Define and advance best practices within data science and product teams

**Qualifications**

* 4-6 years of commercial experience as a Data Scientist solving high impact business problems
* Extensive experience with Python and software engineering fundamentals
* Experience with applied statistics and quantitative modelling (e.g. regression, survival analysis, segmentation, experimentation, and machine learning when needed)
* Demonstrated ability to translate analytical insights into clear recommendations and effectively communicate them to technical and non-technical stakeholders
* Curiosity about the problem domain and an analytical approach
* Strong sense of ownership and growth mindset

**Experience with one or more:**

* Deep understanding of advanced SQL techniques
* Expertise with statistical techniques and their applications in business
* Masterful data storytelling and strategic thinking
* Deep understanding of dimensional modelling and scaling ETL pipelines
* Experience launching productionized machine learning models at scale
* Extensive domain experience in e-commerce, marketing or SaaS

**Additional information**

At Shopify, we are committed to building and fostering an environment where our employees feel included, valued, and heard. Our belief is that a strong commitment to diversity and inclusion enables us to truly make commerce better for everyone. We strongly encourage applications from Indigenous people, racialized people, people with disabilities, people from gender and sexually diverse communities and/or people with intersectional identities. Please take a look at our 2019 Sustainability Report to learn more about Shopify's commitments.

#Location
United States, Canada
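For context on the fundamental methods the posting lists (experimentation, regression and the like), here is a minimal, hypothetical Python sketch of an A/B experiment readout using a Welch t-test. It is illustrative only and not Shopify code; the data is synthetic and the numbers are made up.

```python
# Illustrative only, not Shopify code: a minimal A/B experiment readout
# using a Welch t-test on synthetic data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
control = rng.normal(loc=10.0, scale=3.0, size=5_000)    # e.g. order value, control group
treatment = rng.normal(loc=10.3, scale=3.0, size=5_000)  # e.g. order value, variant group

t_stat, p_value = stats.ttest_ind(treatment, control, equal_var=False)  # Welch's t-test
lift = treatment.mean() - control.mean()
print(f"estimated lift: {lift:.3f}, t = {t_stat:.2f}, p = {p_value:.4f}")
```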


See more jobs at Shopify

# How do you apply?

This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.
This job post is closed and the position is probably filled. Please do not apply.
# We're building the Data Platform of the Future

Join us if you want to rethink the way organizations interact with data. We are a **developer-first company**, committed to building around open protocols and delivering the best experience possible for data consumers and publishers.

Splitgraph is a **seed-stage, venture-funded startup hiring its initial team**. The two co-founders are looking to grow the team to five or six people. This is an opportunity to make a big impact on an agile team while working closely with the founders.

Splitgraph is a **remote-first organization**. The founders are based in the UK, and the company is incorporated in both the USA and the UK. Candidates are welcome to apply from any geography. We want to work with the most talented, thoughtful and productive engineers in the world.

# Open Positions

**Data Engineers welcome!** The job titles have "Software Engineer" in them, but at Splitgraph there's a lot of overlap between data and software engineering. We welcome candidates from all engineering backgrounds.

[Senior Software Engineer - Backend (mainly Python)](https://www.notion.so/splitgraph/Senior-Software-Engineer-Backend-2a2f9e278ba347069bf2566950857250)

[Senior Software Engineer - Frontend (mainly TypeScript)](https://www.notion.so/splitgraph/Senior-Software-Engineer-Frontend-6342cd76b0df483a9fd2ab6818070456)

→ [**Apply to Job**](https://4o99daw6ffu.typeform.com/to/ePkNQiDp) ← (same form for both positions)

# What is Splitgraph?

## Open Source Toolkit

[Our open-source product, sgr,](https://www.github.com/splitgraph/splitgraph) is a tool for building, versioning and querying reproducible datasets. It's inspired by Docker and Git, so it feels familiar. And it's powered by PostgreSQL, so it works seamlessly with existing tools in the Postgres ecosystem. Use Splitgraph to package your data into self-contained data images that you can share with other Splitgraph instances.

## Splitgraph Cloud

Splitgraph Cloud is a platform for data cataloging, integration and governance. The user can upload data, connect live databases, or "push" versioned snapshots to it. We give them a unified SQL interface to query that data, a catalog to discover and share it, and tools to build/push/pull it.

# Learn More About Us

- Listen to our interview on the [Software Engineering Daily podcast](https://softwareengineeringdaily.com/2020/11/06/splitgraph-data-catalog-and-proxy-with-miles-richardson/)
- Watch our co-founder Artjoms present [Splitgraph at the Bay Area ClickHouse meetup](https://www.youtube.com/watch?v=44CDs7hJTho)
- Read our HN/Reddit posts ([one](https://news.ycombinator.com/item?id=24233948), [two](https://news.ycombinator.com/item?id=23769420), [three](https://news.ycombinator.com/item?id=23627066), [four](https://old.reddit.com/r/datasets/comments/icty0r/we_made_40k_open_government_datasets_queryable/))
- [Read our blog](https://www.splitgraph.com/blog)
- Read the slides from our early (2018) presentations: ["Docker for Data"](https://www.slideshare.net/splitgraph/splitgraph-docker-for-data-119112722), [AHL Meetup](https://www.slideshare.net/splitgraph/splitgraph-ahl-talk)
- [Follow us on Twitter](https://www.twitter.com/splitgraph)
- [Find us on GitHub](https://www.github.com/splitgraph)
- [Chat with us in our community Discord](https://discord.gg/eFEFRKm)
- Explore the [public data catalog](https://www.splitgraph.com/explore) where we index 40k+ datasets

# How We Work: What's our stack look like?

We prioritize developer experience and productivity. We resent repetition and inefficiency, and we never hesitate to automate the things that cause us friction. Here's a sampling of the languages and tools we work with:

- **[Python](https://www.python.org/) for the backend.** Our [core open source](https://www.github.com/splitgraph/splitgraph) tech is written in Python (with [a bit of C](https://github.com/splitgraph/Multicorn) to make it more interesting), as well as most of our backend code. The Python code powers everything from authentication routines to database migrations. We use the latest version and tools like [pytest](https://docs.pytest.org/en/stable/), [mypy](https://github.com/python/mypy) and [Poetry](https://python-poetry.org/) to help us write quality software.
- **[TypeScript](https://www.typescriptlang.org/) for the web stack.** We use TypeScript throughout our web stack. On the frontend we use [React](https://reactjs.org/) with [next.js](https://nextjs.org/). For data fetching we use [apollo-client](https://www.apollographql.com/docs/react/) with fully-typed GraphQL queries auto-generated by [graphql-codegen](https://graphql-code-generator.com/) based on the schema that [Postgraphile](https://www.graphile.org/postgraphile) creates by introspecting the database.
- **[PostgreSQL](https://www.postgresql.org/) for the database, because of course.** Splitgraph is a company built around Postgres, so of course we are going to use it for our own database. In fact, we actually have three databases. We have `auth-db` for storing sensitive data, `registry-db` which acts as a [Splitgraph peer](https://www.splitgraph.com/docs/publishing-data/push-data) so users can push Splitgraph images to it using [sgr](https://www.github.com/splitgraph/splitgraph), and `cloud-db` where we store the schemata that Postgraphile uses to autogenerate the GraphQL server.
- **[PL/pgSQL](https://www.postgresql.org/docs/current/plpgsql.html) and [PL/Python](https://www.postgresql.org/docs/current/plpython.html) for stored procedures.** We define a lot of core business logic directly in the database as stored procedures, which are ultimately [exposed by Postgraphile as GraphQL endpoints](https://www.graphile.org/postgraphile/functions/). We find this to be a surprisingly productive way of developing, as it eliminates the need for manually maintaining an API layer between data and code. It presents challenges for testing and maintainability, but we've built tools to help with database migrations and rollbacks, and an end-to-end testing framework that exercises the database routines.
- **[PostgREST](https://postgrest.org/en/v7.0.0/) for auto-generating a REST API for every repository.** We use this excellent library (written in [Haskell](https://www.haskell.org/)) to expose an [OpenAPI](https://github.com/OAI/OpenAPI-Specification)-compatible REST API for every repository on Splitgraph ([example](http://splitgraph.com/mildbyte/complex_dataset/latest/-/api-schema)).
- **Lua ([luajit](https://luajit.org/luajit.html) 5.x), C, and [embedded Python](https://docs.python.org/3/extending/embedding.html) for scripting [PgBouncer](https://www.pgbouncer.org/).** Our main product, the "data delivery network", is a single SQL endpoint where users can query any data on Splitgraph. Really it's a layer of PgBouncer instances orchestrating temporary Postgres databases and proxying queries to them, where we load and cache the data necessary to respond to a query. We've added scripting capabilities to enable things like query rewriting, column masking, authentication, ACL, orchestration, firewalling, etc.
- **[Docker](https://www.docker.com/) for packaging services.** Our CI pipeline builds every commit into about a dozen different Docker images, one for each of our services. A production instance of Splitgraph can be running over 60 different containers (including replicas).
- **[Makefile](https://www.gnu.org/software/make/manual/make.html) and [docker-compose](https://docs.docker.com/compose/) for development.** We use [a highly optimized Makefile](https://www.splitgraph.com/blog/makefile) and `docker-compose` so that developers can easily spin up a stack that mimics production in every way, while keeping it easy to hot reload, run tests, or add new services or configuration.
- **[Nomad](https://www.nomadproject.io/) for deployment and [Terraform](https://www.terraform.io/) for provisioning.** We use Nomad to manage deployments and background tasks. Along with Terraform, we're able to spin up a Splitgraph cluster on AWS, GCP, Scaleway or Azure in just a few minutes.
- **[Airflow](https://airflow.apache.org/) for job orchestration.** We use it to run and monitor jobs that maintain our catalog of [40,000 public datasets](https://www.splitgraph.com/blog/40k-sql-datasets), or ingest other public data into Splitgraph.
- **[Grafana](https://grafana.com/), [Prometheus](https://prometheus.io/), [ElasticSearch](https://www.elastic.co/), and [Kibana](https://www.elastic.co/kibana) for monitoring and metrics.** We believe it's important to self-host fundamental infrastructure like our monitoring stack. We use this to keep tabs on important metrics and the health of all Splitgraph deployments.
- **[Mattermost](https://mattermost.com/) for company chat.** We think it's absolutely bonkers to pay a company like Slack to hold your company communication hostage. That's why we self-host an instance of Mattermost for our internal chat. And of course, we can deploy it and update it with Terraform.
- **[Matomo](https://matomo.org/) for web analytics.** We take privacy seriously, and we try to avoid including any third party scripts on our web pages (currently we include zero). We self-host our analytics because we don't want to share our user data with third parties.
- **[Metabase](https://www.metabase.com/) and [Splitgraph](https://www.splitgraph.com) for BI and [dogfooding](https://en.wikipedia.org/wiki/Eating_your_own_dog_food).** We use Metabase as a frontend to a Splitgraph instance that connects to Postgres (our internal databases), MySQL (Matomo's database), and ElasticSearch (where we store logs and DDN analytics). We use this as a chance to dogfood our software and produce fancy charts.
- **The occasional best-of-breed SaaS service for organization.** As a privacy-conscious, independent-minded company, we try to avoid SaaS services as much as we can. But we still find ourselves unable to resist some of the better products out there. For organization we use tools like [Zoom](https://www.zoom.us) for video calls, [Miro](https://miro.com/) for brainstorming, [Notion](https://www.notion.so) for documentation (you're on it!), [Airtable for workflow management](https://airtable.com/), [PivotalTracker](https://www.pivotaltracker.com/) for ticketing, and [GitLab for dev-ops and CI](https://about.gitlab.com/).
- **Other fun technologies** including [HAProxy](http://www.haproxy.org/), [OpenResty](https://openresty.org/en/), [Varnish](https://varnish-cache.org/), and bash. We don't touch them much because they do their job well and rarely break.

# Life at Splitgraph

**We are a young company building the initial team.** As an early contributor, you'll have a chance to shape our initial mission, growth and company values.

**We think that remote work is the future**, and that's why we're building a remote-first organization. We chat on [Mattermost](https://mattermost.com/) and have video calls on Zoom. We brainstorm with [Miro](https://miro.com/) and organize with [Notion](https://www.notion.so).

**We try not to take ourselves too seriously**, but we are goal-oriented with an ambitious mission.

**We believe that as a small company, we can out-compete incumbents** by thinking from first principles about how organizations interact with data. We are very competitive.

# Benefits

- Fully remote
- Flexible working hours
- Generous compensation and equity package
- Opportunity to make high-impact contributions to an agile team

# How to Apply? Questions?

[**Complete the job application**](https://4o99daw6ffu.typeform.com/to/ePkNQiDp)

If you have any questions or concerns, feel free to email us at [[email protected]](mailto:[email protected])

#Location
🌏 Worldwide
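Since the posting describes Splitgraph's product as a plain PostgreSQL endpoint, here is a minimal sketch of what querying such an endpoint from Python looks like with psycopg2. All connection details below are placeholders rather than real Splitgraph settings; consult their docs for actual values.

```python
# Minimal sketch: querying a PostgreSQL-compatible SQL endpoint (such as the
# "data delivery network" described above) with psycopg2. Every connection
# value here is a placeholder, not a real Splitgraph setting.
import psycopg2

conn = psycopg2.connect(
    host="sql-endpoint.example.com",  # placeholder hostname
    port=5432,
    user="API_KEY",                   # placeholder credentials
    password="API_SECRET",
    dbname="ddn",                     # placeholder database name
)
with conn, conn.cursor() as cur:
    cur.execute("SELECT 1 AS ok;")    # ordinary SQL over the Postgres wire protocol
    print(cur.fetchall())
conn.close()
```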


See more jobs at Splitgraph

# How do you apply?

This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.

DataKitchen

Manager, Toolchain Software Engineering

🌏 Worldwide

This position is a Remote OK original posting (verified) and is now closed. Originally posted on Remote OK.

Tags: docker, aws
This job post is closed and the position is probably filled. Please do not apply.
Job description

We are seeking a world-class Manager of Toolchain Software Engineering, whose charter is to create a technical design and build team that can rapidly integrate dozens of tools in DataKitchen's DataOps platform. There are hundreds of tools that our customers use to do their day-to-day work: data science, data engineering, data visualization, and governance. We have integrated many of those tools, but our customers are better served by starting with example 'content.' And for us, that content is Recipes/Pipelines with working tool integrations across the varied toolchains/clouds that our customers and prospects use to do data analytics. We want our customers to start from example content and be doing DataOps on their platform in less than 10 minutes.

This is your chance to create a team from scratch and build a capability that is essential to our company's success. This is a technical role -- we are looking for a person who will code as well as hire and manage a team of engineers to do the work. The position demands strong communication, planning, and management abilities.

PRINCIPAL DUTIES & RESPONSIBILITIES

* Lead and grow the Toolchain Software Engineering organization, building a highly professional and motivated group.
* Deliver example content and integrations with consistently high quality and reliability, in a timely and predictable manner.
* Take responsibility for the overall toolchain and example life cycle, including testing, updates, design, open-source sharing, and documentation.
* Manage departmental resources and staffing, and build a best-in-class engineering team.
* Manage customer support issues in order to deliver a timely resolution to their software issues.

ESSENTIAL KNOWLEDGE, SKILLS, AND EXPERIENCE

* BS or MS in Computer Science or related field
* At least 3-5 years of development experience building software or software tools
* Minimum of 1 year of experience in a Project Manager or engineering lead position
* Excellent verbal and written communication skills
* Technical experience in the following areas preferred:
  * Python, Docker, SQL, AWS, Azure, or GCP
  * Understanding of data science, data visualization, data quality, or data integration
  * Jenkins, DevOps, CI/CD

PERSONALITY TRAITS

* Leadership with flexibility and self-motivation, with a problem solver's attitude.
* Highly effective written and verbal communication skills with a collaborative work style.
* Customer focus, and a keen desire to make every customer successful.
* Ability to create an open environment conducive to freely sharing information and ideas.

Our company is committed to being remote-first, with employees in Cambridge MA, various other states, Buenos Aires Argentina, Italy, and other countries. You must be located within GMT+2 (e.g. Italy) to GMT-8 (e.g. California). We will not consider candidates outside those time zones. We do not work with recruiters.

DataKitchen is profitable and self-funded and located in Cambridge, MA, USA.

#Salary and compensation
$50,000 - $85,000/year

#Location
🌏 Worldwide
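To make the "example content" idea concrete, here is a small, hypothetical Python pipeline step of the kind a DataOps recipe might chain together: it validates incoming data before handing it to the next tool. It is a generic illustration, not DataKitchen's actual API.

```python
# Hypothetical, toolchain-agnostic pipeline step: validate a dataframe's
# data contract before handing it to the next tool in a recipe/pipeline.
# This is a generic illustration, not DataKitchen's API.
import pandas as pd

def validate_orders(df: pd.DataFrame) -> pd.DataFrame:
    required = {"order_id", "amount", "created_at"}
    missing = required - set(df.columns)
    if missing:
        raise ValueError(f"missing columns: {sorted(missing)}")
    if (df["amount"] < 0).any():
        raise ValueError("negative order amounts found")
    return df

if __name__ == "__main__":
    sample = pd.DataFrame(
        {"order_id": [1, 2], "amount": [9.99, 24.50], "created_at": ["2021-01-01", "2021-01-02"]}
    )
    validate_orders(sample)
    print("validation step passed")
```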


See more jobs at DataKitchen

# How do you apply?

This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.

Prominent Edge

Lead DevOps Engineer

🇺🇸 US-only

This position is a Remote OK original posting (verified) and is now closed. Originally posted on Remote OK.

Tags: devops, aws, gcp
This job post is closed and the position is probably filled. Please do not apply.
We are looking for a Lead DevOps engineer to join our team at Prominent Edge. We are a small, stable, growing company that believes in doing things right. Our projects and the needs of our customers vary greatly; therefore, we always choose the technology stack and approach that best suits the particular problem and the goals of our customers. As a result, we want engineers who do high-quality work, stay current, and are up for learning and applying new technologies when appropriate. We want engineers who have an in-depth knowledge of Amazon Web Services and are up for using other infrastructures when needed. We understand that for our team to perform at its best, everyone needs to work on tasks that they enjoy. Many of our projects are web applications which often have a geospatial aspect to them. We also really take care of our employees as demonstrated in our exceptional benefits package. Check out our website at https://prominentedge.com/ for more information and apply through https://prominentedge.com/careers.

Required skills:
* Experience as a Lead Engineer.
* Minimum of 8 years of total experience, including a minimum of 1 year of web or software development experience.
* Experience automating the provisioning of environments by designing, implementing, and managing configuration and deployment infrastructure-as-code solutions.
* Experience delivering scalable solutions utilizing Amazon Web Services: EC2, S3, RDS, Lambda, API Gateway, Message Queues, and CloudFormation Templates.
* Experience with deploying and administering Kubernetes on AWS, GCP, or Azure.
* Capable of designing secure and scalable solutions.
* Strong *nix administration skills.
* Development in a Linux environment using Bash, PowerShell, Python, JS, Go, or Groovy.
* Experience automating and streamlining build, test, and deployment phases for continuous integration.
* Experience with automated deployment technologies such as Ansible, Puppet, or Chef.
* Experience administering automated build environments such as Jenkins and Hudson.
* Experience configuring and deploying logging and monitoring services - fluentd, logstash, GeoHashes, etc.
* Experience with Git/GitHub/GitLab.
* Experience with DockerHub or a container registry.
* Experience with building and deploying containers to a production environment.
* Strong knowledge of security and recovery from a DevOps perspective.

Bonus skills:
* Experience with RabbitMQ and its administration.
* Experience with kops.
* Experience with HashiCorp Vault administration, and Goldfish (a frontend Vault UI).
* Experience with helm for deployment to Kubernetes.
* Experience with CloudWatch.
* Experience with Ansible and/or a configuration management language.
* Experience with Ansible Tower (not necessary).
* Experience with VPNs; OpenVPN preferable.
* Experience with network administration and understanding network topology and architecture.
* Experience with AWS spot instances or Google preemptible instances.
* Experience with Grafana administration, SSO (Okta or JumpCloud preferable), LDAP / Active Directory administration, CloudHealth or cloud cost optimization.
* Experience with Kubernetes-based software, for example heptio/ark, ingress-nginx, anchore engine.
* Familiarity with the ELK Stack.
* Familiarity with basic administrative tasks and building artifacts on Windows.
* Familiarity with other cloud infrastructures such as Cloud Foundry.
* Strong web or software engineering experience.
* Familiarity with security clearances in case you contribute to our non-commercial projects.

W2 Benefits:
* Not only do you get to join our team of awesome playful ninjas, we also have great benefits:
* Six weeks paid time off per year (PTO + Holidays).
* Six percent 401k matching, vested immediately.
* Free PPO/POS healthcare for the entire family.
* We pay you for every hour you work. Need something extra? Give yourself a raise by doing more hours when you can.
* Want to take time off without using vacation time? Shuffle your hours around in any pay period.
* Want a new MacBook Pro laptop? We'll get you one. If you like your MacBook Pro, we'll buy you the new version whenever you want.
* Want some training or to travel to a conference that is relevant to your job? We offer that too!
* This organization participates in E-Verify.

About You:
* You believe in and practice Agile/DevOps.
* You are organized and eager to accept responsibility.
* You want a seat at the table at the inception of new efforts; you do not want things "thrown over the wall" to you.
* You are an active listener, empathetic and willing to understand and internalize the unique needs and concerns of each individual client.
* You adjust your speaking style for your audience and can interact successfully with both technical and non-technical clients.
* You are detail-oriented but never lose sight of the Big Picture.
* You can work equally well individually or as part of a team.
* U.S. citizenship required.

#Location
🇺🇸 US-only
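As a flavour of the AWS automation described above, here is a minimal boto3 sketch that inventories running EC2 instances. It assumes standard AWS credentials are configured in the environment and is illustrative only, not Prominent Edge tooling.

```python
# Minimal sketch of AWS automation with boto3: list running EC2 instance IDs.
# Assumes AWS credentials/region are already configured; illustrative only.
import boto3

def running_instance_ids(region: str = "us-east-1") -> list:
    ec2 = boto3.client("ec2", region_name=region)
    resp = ec2.describe_instances(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    )
    ids = []
    for reservation in resp["Reservations"]:
        for instance in reservation["Instances"]:
            ids.append(instance["InstanceId"])
    return ids

if __name__ == "__main__":
    print(running_instance_ids())
```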


See more jobs at Prominent Edge

# How do you apply?

This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.

Toptal

Senior Developer

🌏 Worldwide

This position is a Remote OK original posting (verified) and is now closed. Originally posted on Remote OK.

Tags: front end, back end, app, devops

This job post is closed and the position is probably filled. Please do not apply.
***Design your lifestyle as a top freelance developer, with the freedom to work however, wherever, on your terms.***

Freelance work is defining the careers of today's developers in exciting new ways. If you're passionate about working flexibly with leading Fortune 500 brands and innovative Silicon Valley startups, Toptal could be a great fit for your next career shift.

Toptal is an elite talent network for the world's top 3% of developers, connecting the best and brightest freelancers with top organizations. Unlike a 9-to-5 job, you'll choose your own schedule and work from anywhere. **Jobs come to you, so you won't bid for projects against other developers in a race to the bottom.** Plus, Toptal takes care of all the overhead, empowering you to focus on successful engagements while getting paid on time, at the rate you decide, every time.

As a freelance developer, you could join an ever-expanding community of experts in over 120 countries, working remotely on the projects that meet your career ambitions.

That's why the world's top 3% of developers choose Toptal. Developers in our elite network share:

* English language proficiency
* 3+ years of professional experience as a software developer
* Proficiency in at least one of the following is a strong advantage: **React, Ruby on Rails, Python, Swift, iOS, React Native, Azure, Flutter, Go, Unity, Node.js, Shopify or Salesforce**
* Full-time availability is a strong advantage
* Project management skills
* Keen attention to detail

Curious to know how much you could make? Check out our **[developer rate calculator](https://topt.al/Ddc5wb)**.

If you're interested in becoming part of the Toptal network, take the next step by clicking apply and filling out the short form: **[https://topt.al/8JcdXd](https://topt.al/8JcdXd)**

# Responsibilities

* After passing our screening process, you will have access to our network of clients across the globe, including leading Fortune 500s and innovative Silicon Valley start-ups.
* You will have full flexibility to set your working hours per week and your rate. There are no mandatory hours.
* You will have visibility into all published projects that fit your specialization. Our matching team is here to help you identify the projects that are the best fit for your skills and preferences.
* As a client-oriented company, we empower you to fully focus on client objectives. We ensure that you always get paid on time for the hours you spend working with clients.

# Requirements

* You must have 3+ years of software development experience; preference is given to candidates who have experience working for enterprise companies.
* Proficiency in React, Ruby on Rails, Python, Swift, iOS, React Native, Azure, Flutter, Go, Unity, Node.js, Shopify or Salesforce is a strong advantage. Experience with additional frameworks and technologies is a bonus.
* You consider multiple quality dimensions like user impact, failure tolerance, code maintenance, implementation time, security breaches, and performance.
* You are genuinely interested in technology and love to try new things.
* You are willing to help clients make important product and development decisions, share your knowledge with them, and help them achieve their objectives. You solve complex problems but also consider multiple solutions, weigh them, and decide on the best course of action.
* You must be a world-class individual contributor to thrive at Toptal. You're excited about working independently while keeping all relevant stakeholders continuously informed and up to speed with any challenges, setting realistic expectations, and delivering the desired quality. You thrive on providing and receiving honest but always constructive feedback.

#Salary and compensation
$50,000 - $300,000/year

#Location
🌏 Worldwide


See more jobs at Toptal

# How do you apply?

This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.

Netdata Inc

Senior Site Reliability / DevOps Engineer

🌏 Worldwide

This position is a Remote OK original posting and is now closed. Originally posted on Remote OK.

Tags: javascript, go, c
This job post is closed and the position is probably filled. Please do not apply.
Netdata is looking for Senior Site Reliability / DevOps Engineers proficient in CI/CD methodologies, coupled with strong experience in software written in Javascript, Go, C, Python or other scripting languages, to join our distributed (remote) engineering team.

As a Senior SRE/DevOps engineer you will focus on supporting our Netdata Cloud offerings, augmenting our existing development infrastructure by implementing the automations necessary to catalyze further development of both our open-source project and our commercial offerings and, last but certainly not least, participating in the development of Netdata by making sure it's a first-class citizen in various operating environments (e.g. orchestrated containers, IoT devices etc.).

Your work will include building CI/CD pipelines, packaging, installation facilities and operational processes, as well as developing custom solutions for our various teams and systems. As a Netdata SRE/DevOps engineer you will also be assisting engineers across our company, enabling them to provide world-class solutions for numerous platforms, as well as helping our community, open-source contributors and team members with your deep knowledge of systems and troubleshooting skills.

**Responsibilities**

* Develop our automated CI/CD, packaging, deployment and execution environment infrastructure.
* Develop automation tools to catalyze existing development or operational processes.
* Evaluate, architect and develop technology options for our infrastructure and systems.
* Troubleshoot, maintain, enhance and augment our platform.
* Automate tasks wherever possible.
* Stay up-to-date on emerging technologies.

**Job Requirements**

**Required experience**

* A bachelor's degree in Computer Science or equivalent
* 3+ years of experience with CI/CD tools (Travis, GitLab, AWS, Azure, etc.) and methodologies
* Minimum 3 years of Linux systems development and/or administration
* Minimum 2 years of experience with at least one scripting language, coupled with related automation projects
* Previous experience with cloud-based technologies and surrounding operational processes
* Self-motivated, conscientious, with a problem-solving, hands-on mindset
* Perfectionist where it matters, but also pragmatic, with effective time management skills
* Team player, eager to help
* Excellent analytical skills
* Excellent command of spoken and written English

**Preferred experience**

* Minimum 2 years of Go, Javascript and C development experience in demanding environments
* Expert on Continuous Integration, with long experience in Test Automation
* 5+ years of shell scripting experience, in at least 2 languages (Bash, Python, Perl, Ruby, etc.)
* Minimum 2 years of experience with Google Cloud App Engine and surrounding operational processes
* Experience with configuration management and tools to support it (Ansible, Puppet, etc.)
* Experience with monitoring solutions and service assurance in general
* A Linux cross-distribution artisan, with a good amount of knowledge of Windows system administration
* Open source contributor
* Agile development methodology

#Location
🌏 Worldwide
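As a small example of the scripting-language automation this role involves, here is a hypothetical Python health-check that polls an HTTP endpoint and reports whether it answers. The URL is a placeholder (Netdata agents typically listen on port 19999, but verify the actual API path in the official docs).

```python
# Hypothetical automation sketch: poll an HTTP endpoint and report health.
# The URL below is a placeholder; verify the real agent API path in the docs.
import json
import urllib.request

def is_healthy(url: str = "http://localhost:19999/api/v1/info", timeout: float = 5.0) -> bool:
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            payload = json.load(resp)      # expect a JSON body from the endpoint
            return resp.status == 200 and bool(payload)
    except (OSError, ValueError):          # network errors or non-JSON responses
        return False

if __name__ == "__main__":
    print("healthy" if is_healthy() else "unreachable")
```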


See more jobs at Netdata Inc

# How do you apply?

This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.
This job post is closed and the position is probably filled. Please do not apply.
***Design your lifestyle as a top freelance developer, with the freedom to work however, wherever, on your terms.***

Freelance work is defining the careers of today's developers in exciting new ways. If you're passionate about working flexibly with leading Fortune 500 brands and innovative Silicon Valley startups, Toptal could be a great fit for your next career shift.

Toptal is an elite talent network for the world's top 3% of developers, connecting the best and brightest freelancers with top organizations. Unlike a 9-to-5 job, you'll choose your own schedule and work from anywhere. **Jobs come to you, so you won't bid for projects against other developers in a race to the bottom.** Plus, Toptal takes care of all the overhead, empowering you to focus on successful engagements while getting paid on time, at the rate you decide, every time.

As a freelance developer, you could join an ever-expanding community of experts in over 120 countries, working remotely on the projects that meet your career ambitions.

That's why the world's top 3% of developers choose Toptal. Developers in our elite network share:

* English language proficiency
* 3+ years of professional experience
* Project management skills
* A keen attention to detail

Curious to know how much you could make? Check out our [developer rate calculator](https://topt.al/azc266).

If you're interested in becoming part of the Toptal network, take the next step by clicking apply and filling out the short form: **[https://topt.al/QAc4Xw](https://topt.al/QAc4Xw)**

# Responsibilities

* After passing our screening process, you will have access to our network of clients across the globe, including leading Fortune 500s and innovative Silicon Valley start-ups.
* You will have full flexibility to set your working hours per week and your rate. There are no mandatory hours.
* You will have visibility into all published projects that fit your specialization. Our matching team is here to help you identify the projects that are the best fit for your skills and preferences.
* As a client-oriented company, we empower you to fully focus on client objectives. We ensure that you always get paid on time for the hours you spend working with clients.

# Requirements

* You must have 3+ years of software development experience; preference is given to candidates who have experience working for enterprise companies.
* Proficiency in Ruby on Rails, Python, Node.js or PHP is a must. Experience with additional frameworks and technologies is a bonus.
* You consider multiple quality dimensions like user impact, failure tolerance, code maintenance, implementation time, security breaches, and performance.
* You are genuinely interested in technology and love to try new things.
* You are willing to help clients make important product and development decisions, share your knowledge with them, and help them achieve their objectives. You solve complex problems but also consider multiple solutions, weigh them, and decide on the best course of action.
* You must be a world-class individual contributor to thrive at Toptal. You're excited about working independently while keeping all relevant stakeholders continuously informed and up to speed with any challenges, setting realistic expectations, and delivering the desired quality. You thrive on providing and receiving honest but always constructive feedback.

#Salary and compensation
50k-300k/year

#Location
🌏 Worldwide


See more jobs at Toptal

# How do you apply?

This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.

GeoComm

Senior Software Engineer, Security Video Integration

This position is a Remote OK original posting (verified) and is now closed. Originally posted on Remote OK.

Tags: esri, gis, agile

This job post is closed and the position is probably filled. Please do not apply.
We are looking for a motivated and experienced senior software engineer to help enhance our development effort using a cutting-edge tech stack. Successful candidates will demonstrate a passion for high quality software, strong engineering principles and methodical problem-solving skills. This is a unique opportunity to build products that truly make a difference. This position is exempt and reports directly to the Joint Operations General Manager.

Qualifications
* BS/MS in Computer Science or Software Engineering
* 7+ years of experience developing software applications and web services
* Programming experience in Python, C# / .NET, JavaScript or TypeScript
* Working experience with video camera system SDKs and APIs
* Working experience with frameworks such as Angular
* Working experience with SQL databases
* Working knowledge of Git version control
* Hands-on experience creating responsive web applications using modern frameworks
* Experience designing applications that operate on cloud environments such as AWS or Azure
* Ability to establish priorities and work independently on multiple tasks
* Knowledge of Agile software development methodologies and practices

Preferred Experience
* Experience developing, maintaining, and innovating large scale, consumer facing applications
* Familiarity with the development challenges inherent in highly scalable and available web applications
* Experience with open source technologies
* Experience with various modern web frameworks
* Experience developing GIS applications using Esri technology
* Experience with Docker

Geo-Comm is an equal opportunity employer and does not discriminate in hiring or employment on the basis of race, color, religion, sex, national origin, age, disability, marital status, familial status, sexual orientation, veteran status or any other status protected by applicable law. Geo-Comm Corporation provides a drug-free working environment and is an Equal Opportunity Employer.


See more jobs at GeoComm

Visit GeoComm's website

# How do you apply?

This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.