Remote Engineer + Data Jobs in Sep 2019

get a remote job
you can do anywhere

5 Remote Engineer + Data Jobs at companies like Doximity, Creative Commons, and Good Eggs; the most recent was posted 14 days ago



👉 Hiring for a remote Engineer + Data position?

Post a Job - $299
on the 🏆 #1 remote jobs board

Last 30 days

Doximity is transforming the healthcare industry. Our mission is to help doctors be more productive, informed, and connected. As a software engineer focused on our data stack, you'll work within cross-functional delivery teams alongside other engineers, designers, and product managers in building software to help improve healthcare.

Our [team](https://www.doximity.com/about/company#theteam) brings a diverse set of technical and cultural backgrounds, and we like to think pragmatically in choosing the tools most appropriate for the job at hand.

**About Us**

* We rely heavily on Python, Airflow, Spark, MySQL, and Snowflake for most of our data pipelines
* We have over 350 private repositories in GitHub containing our pipelines, our own internal multi-functional tools, and [open-source projects](https://github.com/doximity)
* We have worked as a distributed team for a long time; we're currently [about 65% distributed](https://blog.brunomiranda.com/building-a-distributed-engineering-team-85d281b9b1c)
* Find out more on the [Doximity engineering blog](https://engineering.doximity.com/)
* Our [company core values](https://work.doximity.com/)
* Our [recruiting process](https://engineering.doximity.com/articles/engineering-recruitment-process-doximity)
* Our [product development cycle](https://engineering.doximity.com/articles/mofo-driven-product-development)
* Our [on-boarding & mentorship process](https://engineering.doximity.com/articles/software-engineering-on-boarding-at-doximity)

**Here's How You Will Make an Impact**

* Collaborate with product managers, data analysts, and data scientists to develop pipelines and ETL tasks that facilitate the extraction of insights from data.
* Build, maintain, and scale data pipelines that empower Doximity's products.
* Establish data architecture processes and practices that can be scheduled, automated, and replicated, and that serve as standards for other teams to leverage.
* Spearhead, plan, and carry out the implementation of solutions while self-managing.

**About You**

* You have at least three years of professional experience developing data processing, enrichment, transformation, and integration solutions
* You are fluent in Python, an expert in SQL, and can script your way around Linux systems with bash
* You are no stranger to data warehousing and designing data models
* Bonus: you have experience building data pipelines with Apache Spark in a multi-database ecosystem
* You are foremost an engineer, passionate about high code quality, automated testing, and other engineering best practices
* You have the ability to self-manage, prioritize, and deliver functional solutions
* You possess advanced knowledge of Unix, Git, and AWS tooling
* You agree that concise and effective written and verbal communication is a must for a successful team
* You are able to maintain a minimum of 5 hours of overlap with 9:30 AM to 5:30 PM Pacific time
* You can dedicate about 18 days per year for travel to company events

**Benefits**

Doximity has industry-leading benefits. For an updated list, see our careers page.

**More info on Doximity**

We're thrilled to be named the Fastest Growing Company in the Bay Area, and one of Fast Company's Most Innovative Companies. Joining Doximity means being part of an incredibly talented and humble team. We work on amazing products that over 70% of US doctors (and over one million healthcare professionals) use to make their busy lives a little easier. We're driven by the goal of improving inefficiencies in our $3.5 trillion U.S. healthcare system and love creating technology that has a real, meaningful impact on people's lives. To learn more about our team, culture, and users, check out our careers page, company blog, and engineering blog. We're growing steadily, and there are plenty of opportunities for you to make an impact.

*Doximity is proud to be an equal opportunity employer, committed to providing employment opportunities regardless of race, religious creed, color, national origin, ancestry, physical disability, mental disability, medical condition, genetic information, marital status, sex, gender, gender identity, gender expression, pregnancy, childbirth and breastfeeding, age, sexual orientation, military or veteran status, or any other protected classification. We also consider qualified applicants with criminal histories, consistent with applicable federal, state and local law.*

#Location
- North America
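The stack above centers on Python and Airflow. As a rough illustration of the day-to-day pipeline work such a role involves, here is a minimal, hypothetical Airflow DAG that extracts from a MySQL source and loads into Snowflake; the DAG id, task names, and callables are invented for this sketch, not Doximity code.

```python
# Hypothetical Airflow DAG sketching the Python/Airflow pipeline work the
# posting describes. Task names and callables are illustrative only.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python_operator import PythonOperator  # Airflow 1.10-era import


def extract_from_mysql(**context):
    # Placeholder: pull one day's worth of rows from an upstream MySQL source.
    print("extracting rows for", context["ds"])


def load_into_snowflake(**context):
    # Placeholder: write the transformed batch into a Snowflake table.
    print("loading batch for", context["ds"])


default_args = {
    "owner": "data-eng",
    "retries": 2,
    "retry_delay": timedelta(minutes=5),
}

with DAG(
    dag_id="example_daily_etl",
    default_args=default_args,
    start_date=datetime(2019, 9, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(
        task_id="extract", python_callable=extract_from_mysql, provide_context=True
    )
    load = PythonOperator(
        task_id="load", python_callable=load_into_snowflake, provide_context=True
    )

    extract >> load  # run the load only after the extract succeeds
```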

See more jobs at Doximity

Apply for this Job

👉 Please mention that you found the job on Remote OK; this helps us get more companies to post here!

When applying for jobs, you should NEVER have to pay to apply. That is a scam! Posts that link to pages with "how to work online" are also scams; don't use them or pay for them. Always verify that you're actually talking to the company in the job post and not an imposter. Scams in remote work are rampant, so be careful! When you click the apply button above, you will leave Remote OK and go to that company's job application page outside this site. Remote OK accepts no liability or responsibility as a consequence of any reliance upon information there (on external sites) or here.

Creative Commons


Senior Data Engineer

verified · 🌏 Worldwide

data

engineering

engineer

senior


🌏 Worldwide · 24d
Creative Commons is building a "front door" to the growing universe of openly licensed and public domain content through CC Search and the CC Catalog API. The Senior Data Engineer reports to the Director of Engineering and is responsible for CC Catalog, the open source catalog that powers those products. This project will unite billions of records for openly licensed and public domain works and metadata, across multiple platforms, diverse media types, and a variety of user communities and partners.

**Diversity & inclusion**

We believe that diverse teams build better organizations and better services. Applications from qualified candidates from all backgrounds, including those from under-represented communities, are very welcome. Creative Commons works openly as part of a global community, guided by collaboratively developed codes of conduct and anti-harassment policies.

**Work environment and location**

Creative Commons is a fully distributed organization with no central office. You must have reasonable mobility for travel to twice-annual all-staff meetings and the CC Global Summit (a total of 3 trips per year). We provide a subsidy towards high-speed broadband access. A laptop/desktop computer and necessary resources are supplied.

# Responsibilities

**Primary responsibilities**

Architect, build, and maintain the existing CC Catalog, including:

* Ingesting content from new and existing sources of CC-licensed and public domain works.
* Scaling the catalog to support billions of records and various media types.
* Implementing resilient, distributed data solutions that operate robustly at web scale.
* Automating data pipelines and workflows.
* Collaborating with the Backend Software Engineer and Front End Engineer to support the smooth operation of the CC Catalog API and CC Search.

Augment and improve the metadata associated with content indexed into the catalog using one or more of the following: machine learning, computer vision, OCR, data analysis, web crawling/scraping.

Build an open source community around the CC Catalog, including:

* Restructuring the code and workflows so that community contributors can identify new sources of content and add new data to the catalog.
* Guiding new contributors and potentially participating in projects such as Google Summer of Code as a mentor.
* Writing blog posts, maintaining documentation, reviewing pull requests, and responding to issues from the community.

Collaborate with other outside communities, companies, and institutions to further Creative Commons' mission.

# Requirements

* Demonstrated experience building and deploying large-scale data services, including database design and modeling, ETL processing, and performance optimization
* Proficiency with Python
* Proficiency with Apache Spark
* Experience with cloud computing platforms such as AWS
* Experience with Apache Airflow or other workflow management software
* Experience with machine learning, or interest in picking it up
* Fluent in English
* Excellent written and verbal communication skills
* Ability to work independently, build good working relationships, and actively communicate, contribute, and speak up in a remote work structure
* Curiosity and a desire to keep learning
* Commitment to consumer privacy and security

Nice to have (but not required):

* Experience with contributing to or maintaining open source software
* Experience with web crawling
* Experience with Docker

#Salary
100000 - 120000

#Location
- 🌏 Worldwide
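As a rough illustration of the Spark-based catalog work described above, here is a hypothetical PySpark sketch that normalizes provider metadata and deduplicates records by landing URL. All paths, field names, and the schema are invented for this sketch, not actual CC Catalog code.

```python
# Hypothetical PySpark job: normalize metadata records delivered by multiple
# providers and keep one record per landing URL. Paths and fields are invented.
from pyspark.sql import SparkSession, Window
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("catalog-dedup-sketch").getOrCreate()

# Assume each provider delivers newline-delimited JSON metadata dumps.
records = spark.read.json("s3://example-bucket/provider-dumps/*.json")

normalized = (
    records
    .withColumn("license", F.lower(F.trim(F.col("license"))))
    .filter(F.col("foreign_landing_url").isNotNull())
)

# Keep only the most recently updated record per landing URL.
latest_first = Window.partitionBy("foreign_landing_url").orderBy(
    F.col("updated_on").desc()
)
deduped = (
    normalized
    .withColumn("rn", F.row_number().over(latest_first))
    .filter(F.col("rn") == 1)
    .drop("rn")
)

deduped.write.mode("overwrite").parquet("s3://example-bucket/catalog/images/")
```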

See more jobs at Creative Commons

# How do you apply?

Please email your cover letter and resume as a single PDF to “[email protected]” with the subject heading “Data Engineer / [Last Name].” Phone calls and messages will not be returned.
Apply for this Job


This year

Good Eggs


Senior Data Platform Engineer

verified · 🇺🇸 US-only

data

snowflake

dbt

devops


🇺🇸 US-only · 5mo
At Good Eggs, we believe feeding your family well shouldn't come with a trade-off, be it your time, your standards, or your wallet. We're pioneering a new way to fill your fridge, by sourcing the best food from producers we know and trust and bringing it straight to you, all at a price the same as or less than your grocery store.

We run a healthy agile engineering process with:

* pair programming
* test-driven development
* continuous deployment

# Responsibilities

We're looking for a Data Platform Engineer who is interested in a multidisciplinary engineering environment and is excited to support the culture of data alongside a passionate, mission-driven team.

As a Data Platform Engineer, you'll work on ingest, modeling, warehousing, and BI tools, and have significant influence over the tools and processes we deliver to our customers (analysts, engineers, business leaders). We have a modern data platform and a strong team of DevOps Engineers and Full-Stack Data Analysts to collaborate with. Some of the tech involved:

* custom code written in multiple languages (primarily Node.js/TypeScript, but also Python and Go)
* Fivetran & Segment
* Snowflake
* dbt
* Mode Analytics
* a modern, AWS-based, containerized application platform

# Requirements

**Ideal candidates will have:**

* A desire to use their talents to make the world a better place
* 2+ years of agile software development experience, including automated testing and pair programming
* 3+ years of full-time data experience (ETL, warehousing, modeling, supporting analysts)
* Interest in learning and adopting new tools and techniques
* A bachelor's degree in computer science or computer engineering, or equivalent experience

**Experience in some of the following areas:**

* Node.js/TypeScript, Go, Python, SQL
* DevOps, cloud infrastructure, developer tools
* Container-based deployments, microservice architecture

**Bonus points for:**

* Previous work experience involving e-commerce, physical operations, finance, or BizOps
* Being data-driven: the ability to get insights from data
* Experience with dimensional modeling and/or BEAM*

#Location
- 🇺🇸 US-only
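For readers unfamiliar with the warehouse this posting names, here is a minimal sketch of querying Snowflake from Python using the snowflake-connector-python package; the credentials, table, and query are placeholders, not Good Eggs infrastructure (their dbt models are SQL, but Python is used here to keep this page's examples in one language).

```python
# Minimal sketch of querying Snowflake from Python. All credentials and
# object names below are placeholders, not real infrastructure.
import snowflake.connector

conn = snowflake.connector.connect(
    account="example_account",
    user="example_user",
    password="example_password",
    warehouse="ANALYTICS_WH",
    database="ANALYTICS",
    schema="PUBLIC",
)

try:
    cur = conn.cursor()
    # e.g., a sanity check on a hypothetical dbt-built orders model.
    cur.execute(
        "SELECT COUNT(*) FROM orders "
        "WHERE ordered_at >= DATEADD(day, -7, CURRENT_DATE)"
    )
    (recent_orders,) = cur.fetchone()
    print(f"orders in the last 7 days: {recent_orders}")
finally:
    conn.close()
```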

See more jobs at Good Eggs

# How do you apply?

This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.
Apply for this Job



**The Company**

Synergy Sports Technology, named by Fast Company as one of the world's top 10 most innovative companies in sports, seeks talented **Senior Backend Data Platform Engineers** to join our team on a long-term contract basis.

This position offers a tremendous opportunity to work with the only company that delivers on-demand professional-level basketball, baseball, and hockey analytics linked to supporting video to nearly 1500 college, professional, and international teams. Our systems are highly complex, containing petabytes of data and video, and require extremely talented engineers to maintain the scale and efficiency of our products.

As a member of Synergy's engineering team, you will contribute to the ongoing development of Synergy's revolutionary online sports data and video delivery solutions, building applications such as:

* Client Analytic Tools
* Video Editing and Capture Tools
* Data Logging Tools
* Operational Game, Data and Video Pipeline Tools
* Backend Data and Video Platforms

Synergy's work environment is geographically distributed, with employees working from home offices. The successful candidate must be comfortable working in a virtual office, using online collaboration tools for all communication and interaction in conversational English. Synergy development staff work in a deadline-oriented, demanding, non-standard environment in which personal initiative and a strong work ethic are rewarded. Good communication skills, self-motivation, and the ability to work effectively with minimal supervision are crucial. Nonstandard working hours may be required, as Synergy operates on a 24x7 system for clients, with associated deadlines and requirements. Pay rate is dependent on experience.

Information for all positions:

* All positions last roughly a year, with contracts renewing every year; talented engineers are kept on for future projects.
* Engineers should be available for phone calls M-F from 7am to 10am Pacific time. There will usually be 1 or 2 phone calls each week of 30 to 90 minutes each. All other working-hours availability is up to the engineer, to best balance communication with their team against their personal commitments outside of work.
* Working an average of 40 hours per week is expected, except in rare or temporary circumstances. Each week is flexible: when and how much to work per day is up to the engineer, and it is fine to work heavier and lighter weeks if desired, but the preference is to average 40 hours per week.
* No travel is required

# Responsibilities

**Team Objectives**

A candidate joining the Data Platform team can expect to work on the following types of projects:

* Creating internal and external APIs to support both data and video
* Building complex data models supporting the business rules of sports
* Developing algorithms that ingest and transform multiple streams of data and collapse them into a single event structure (see the sketch after this listing)
* Refactoring code to a .NET Core environment
* Scaling out current systems to support new sports
* Building build and test automation systems
* Building complex reporting data structures for analytical systems

# Requirements

**Required Skill Sets**

* NoSQL database (MongoDB preferred)
* C# (latest version, with a preference for .NET Core)
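As a language-neutral illustration of "collapsing multiple streams of data into a single event structure" (the role itself is C#/.NET Core with MongoDB; Python with pymongo is used here only to keep this page's examples in one language), here is a hypothetical sketch that joins a play-by-play record with its video clip metadata into one event document. Collection and field names are invented, not Synergy's schema.

```python
# Hypothetical sketch: merge two feeds (play-by-play data and video clip
# metadata) into a single event document. All names are invented.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
db = client["sports_demo"]


def merge_event(game_id: str, event_id: str) -> None:
    """Join one play-by-play record with its video metadata into db.events."""
    play = db.plays.find_one({"game_id": game_id, "event_id": event_id})
    clip = db.clips.find_one({"game_id": game_id, "event_id": event_id})
    if play is None:
        return
    event = {
        "game_id": game_id,
        "event_id": event_id,
        "description": play.get("description"),
        "clock": play.get("clock"),
        "video_url": clip.get("url") if clip else None,
    }
    # Upsert so re-running the pipeline is idempotent.
    db.events.replace_one(
        {"game_id": game_id, "event_id": event_id}, event, upsert=True
    )
```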

See more jobs at Synergy Sports Technology

# How do you apply?

This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.
Apply for this Job


Doximity is transforming the healthcare industry. Our mission is to help doctors save time so they can provide better care for patients.

We value diversity, in backgrounds and in experiences. Healthcare is a universal concern, and we need people from all backgrounds to help build the future of healthcare. Our data team is deliberate and self-reflective about the kind of team and culture we are building, seeking data engineers and scientists who are not only strong in their own aptitudes but care deeply about supporting each other's growth. We have one of the richest healthcare datasets in the world, and our team brings a diverse set of technical and cultural backgrounds.

You will join a small team of Software Engineers focused on Data Engineering Infrastructure, building and maintaining all aspects of our data pipelines, ETL processes, data warehousing, ingestion, and overall data stack.

**How you'll make an impact:**

* Help establish robust solutions for consolidating data from a variety of data sources.
* Establish data architecture processes and practices that can be scheduled, automated, and replicated, and that serve as standards for other teams to leverage.
* Collaborate extensively with the DevOps team to establish best practices around server provisioning, deployment, maintenance, and instrumentation.
* Build and maintain efficient data integration, matching, and ingestion pipelines.
* Build instrumentation, alerting, and error-recovery systems for the entire data infrastructure.
* Spearhead, plan, and carry out the implementation of solutions while self-managing.
* Collaborate with product managers and data scientists to architect pipelines that support delivery of recommendations and insights from machine learning models.

**What we're looking for:**

* Fluency in Python; SQL mastery.
* Ability to write efficient, resilient, and evolvable ETL pipelines.
* Experience with data modeling, entity-relationship modeling, normalization, and dimensional modeling.
* Experience building data pipelines with Spark and Kafka.
* Comprehensive experience with Unix, Git, and AWS tooling.
* An astute ability to self-manage, prioritize, and deliver functional solutions.

**Nice to have:**

* Experience with MySQL replication, binary logs, and log shipping.
* Experience with Hive, EMR, Presto, or similar technologies.
* Experience with MPP databases such as Redshift, and with both normalized and denormalized data models.
* Knowledge of data design principles and experience using ETL frameworks such as Sqoop or equivalent.
* Experience designing, implementing, and scheduling data pipelines on workflow tools like Airflow or equivalent.
* Experience working with Docker, PyCharm, Neo4j, Elasticsearch, or equivalent.

**About Doximity**

We're thrilled to be named the Fastest Growing Company in the Bay Area, and one of Fast Company's Most Innovative Companies. Joining Doximity means being part of an incredibly talented and humble team. We work on amazing products that over 70% of US doctors (and over one million healthcare professionals) use to make their busy lives a little easier. We're driven by the goal of improving inefficiencies in our $2.5 trillion U.S. healthcare system and love creating technology that has a real, meaningful impact on people's lives. To learn more about our team, culture, and users, check out our careers page, company blog, and engineering blog. We're growing fast, and there are plenty of opportunities for you to make an impact. Join us!

*Doximity is proud to be an equal opportunity employer, committed to providing employment opportunities regardless of race, religious creed, color, national origin, ancestry, physical disability, mental disability, medical condition, genetic information, marital status, sex, gender, gender identity, gender expression, pregnancy, childbirth and breastfeeding, age, sexual orientation, military or veteran status, or any other protected classification. We also consider qualified applicants with criminal histories, consistent with applicable federal, state and local law.*

# Requirements

Use the apply button.
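Since this role calls for pipelines built with Spark and Kafka, here is a hypothetical PySpark Structured Streaming sketch of that pattern, assuming the spark-sql-kafka connector is on the classpath; the topic, broker, and paths are placeholders, not Doximity systems.

```python
# Hypothetical Spark Structured Streaming job: consume a Kafka topic and land
# the decoded records in Parquet for downstream ETL. Names are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("kafka-ingest-sketch").getOrCreate()

# Read events from a (hypothetical) Kafka topic as a streaming DataFrame.
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092")
    .option("subscribe", "clinical-events")
    .load()
)

# Kafka delivers key/value as binary; decode the payload for processing.
decoded = events.select(
    F.col("key").cast("string"),
    F.col("value").cast("string").alias("payload"),
    "timestamp",
)

# Land the raw decoded stream in Parquet; the checkpoint makes restarts safe.
query = (
    decoded.writeStream.format("parquet")
    .option("path", "s3://example-bucket/raw/clinical-events/")
    .option("checkpointLocation", "s3://example-bucket/checkpoints/clinical-events/")
    .start()
)
query.awaitTermination()
```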

See more jobs at Doximity

# How do you apply?

This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.
Apply for this Job
