Remote Python + ETL Jobs in September 2020
Browse 2 remote Python + ETL jobs in September 2020 at companies like vidIQ and Doximity, working as a Python Software Engineer, Data Integration or a Senior Data Engineer.
Previous remote Python + ETL jobs

vidIQ

Senior Data Engineer

Timezones: from UTC-2 to UTC+8

Tags: python, scala, etl, aws
This job post is closed and the position is probably filled. Please do not apply.
**About vidIQ:**

vidIQ helps YouTube creators and brands generate more views and subscribers while saving time. With over 1 million active weekly users, we are the #1 Chrome Extension for YouTube creators, with clients including Red Bull, Buzzfeed, PBS, TMZ, and the BBC, as well as hundreds of thousands of the largest YouTube creators in the world. We're backed by top Silicon Valley investors including Scott Banister and Mark Cuban. vidIQ is profitable, with a fully remote team of over 25 employees and growing.

**Role & Responsibilities**

vidIQ is seeking a highly motivated Senior Data Engineer with 5+ years of hands-on data engineering experience to join our growing team. The ideal candidate will be a go-getter with the ability to work independently. In this role, you will have oversight of partitioning data, building an ETL pipeline, data compaction, and AWS optimization.

You must be highly collaborative and a self-starter who is able to work in a fast-paced environment. Strong communication skills are essential in this role: you will tell the back-end team where and how to implement data integration and persistence, and you will report to management the volumes of data we are gathering, along with the data access points and how to use that data.

# Responsibilities

**You will be a good fit for this role if the following are true:**

* You love building things. You like new challenges and strive to ship new features to customers on a regular basis.
* You love to learn. You enjoy keeping up with the latest trends. If a project uses a tool that's new to you, you dive into the docs and tutorials to figure it out.
* You act like an owner. When bugs appear, you document and fix them. When projects are too complex, you work with others to refine the scope until it's something you believe can be built in a reasonable amount of time and maintained in the long run.
* You care about code quality. You believe simple is better and strive to write code that is easy to read and maintain.
* You consider edge cases and write tests to handle them. When you come across legacy code that is difficult to understand, you add comments or refactor it to make it easier for the next person.
* You understand balance. Great products must balance performance, customer value, code quality, dependencies, and so on. You know how to weigh all of these concerns while keeping your focus on shipping things.
* You over-communicate by default. If a project is off-track, you bring it up proactively and suggest ways to simplify and get things going. You share status updates without being asked and strive to keep things as honest and transparent as possible.

# Requirements

**Minimum experience:**

* 5+ years of experience using *Python* for internal data pipelines (moving data inside an *AWS* account): *numpy*, *pandas*.
* Experience with *Scala* for external data pipelines (moving data from outside an *AWS* account into it): *FS2*, *http4s*.
* Additional experience with *DynamoDB*, *Lambda*, *Athena*, *S3*, and *AWS Glue*. Familiarity with *Spark* (at the moment, Scala only) preferred.
* Hands-on experience with data workflow orchestration (*Airflow*).

# Salary

$70,000

# Location

Timezones: from UTC-2 to UTC+8
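The role above centers on partitioning data inside a Python/pandas ETL pipeline. As a minimal illustrative sketch of that kind of step (the function, field names, and records below are hypothetical, not vidIQ's actual pipeline), here is an extract-transform-partition pass that splits raw event records into per-day output chunks:

```python
import pandas as pd

def partition_by_day(records):
    """Transform raw event records and partition them by event date.

    Illustrative ETL step: extract (list of dicts) -> transform
    (type coercion) -> load (one DataFrame per calendar-day partition).
    """
    df = pd.DataFrame(records)
    df["ts"] = pd.to_datetime(df["ts"])      # coerce string timestamps
    df["views"] = df["views"].astype(int)    # coerce string counts
    # Partition key: one output chunk per calendar day.
    return {
        str(day.date()): part.drop(columns="ts")
        for day, part in df.groupby(df["ts"].dt.floor("D"))
    }

parts = partition_by_day([
    {"ts": "2020-09-01T10:00:00", "channel": "a", "views": "5"},
    {"ts": "2020-09-01T18:30:00", "channel": "b", "views": "7"},
    {"ts": "2020-09-02T09:00:00", "channel": "a", "views": "3"},
])
```

In a real AWS pipeline each partition would typically land as an object under a date-keyed S3 prefix so that Athena or Glue can prune partitions at query time.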


See more jobs at vidIQ

# How do you apply?

This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.
Doximity

Python Software Engineer, Data Integration

This job post is closed and the position is probably filled. Please do not apply.
Doximity is transforming the healthcare industry. Our mission is to help doctors save time so they can provide better care for patients. We have one of the richest healthcare datasets in the world, and our team brings a diverse set of technical and cultural backgrounds.

You will join our data engineering team to build and maintain all aspects of our ingestion data pipelines, ETL processes, and data warehousing.

**How you'll make an impact:**

* Collaborate with product managers, data analysts, and data scientists to develop pipelines and ETL tasks that facilitate the extraction of insights from data.
* Build and maintain efficient data integration, matching, and ingestion pipelines.
* Establish data architecture processes and practices that can be scheduled, automated, and replicated, and that serve as standards for other teams to leverage.
* Spearhead, plan, and carry out the implementation of solutions while self-managing.

**What we're looking for:**

* Mastery of Python and SQL.
* Ability to write efficient, resilient, and evolvable ETL pipelines.
* Passion for clean code and testing with Pytest, FactoryBoy, or equivalent.
* Experience building data pipelines with Spark is a big plus.
* Astute ability to self-manage, prioritize, and deliver functional solutions.
* Advanced knowledge of Unix, Git, and AWS tooling.
* Experience with data modeling, entity-relationship modeling, normalization, and dimensional modeling.

**About Doximity**

We're thrilled to be named the Fastest Growing Company in the Bay Area, and one of Fast Company's Most Innovative Companies. Joining Doximity means being part of an incredibly talented and humble team. We work on amazing products that over 70% of US doctors (and over one million healthcare professionals) use to make their busy lives a little easier. We're driven by the goal of improving inefficiencies in our $2.5 trillion U.S. healthcare system and love creating technology that has a real, meaningful impact on people's lives. To learn more about our team, culture, and users, check out our careers page, company blog, and engineering blog. We're growing fast, and there's plenty of opportunity for you to make an impact. Join us!

Doximity is proud to be an equal opportunity employer, committed to providing employment opportunities regardless of race, religious creed, color, national origin, ancestry, physical disability, mental disability, medical condition, genetic information, marital status, sex, gender, gender identity, gender expression, pregnancy, childbirth and breastfeeding, age, sexual orientation, military or veteran status, or any other protected classification. We also consider qualified applicants with criminal histories, consistent with applicable federal, state, and local law.
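The posting above asks for resilient ETL code tested with Pytest. As a hedged illustration of what that looks like in practice (the dedup function, field names, and records are hypothetical examples, not Doximity code), here is a small record-matching step with a Pytest-style test alongside it:

```python
def dedupe_latest(rows):
    """Keep only the most recent record per provider id.

    Illustrative matching/ingestion step: when the same entity arrives
    from multiple sources, retain the row with the newest updated_at.
    """
    latest = {}
    for row in rows:
        key = row["provider_id"]
        if key not in latest or row["updated_at"] > latest[key]["updated_at"]:
            latest[key] = row
    # Sort for deterministic downstream output.
    return sorted(latest.values(), key=lambda r: r["provider_id"])

# Pytest discovers functions named test_*; run with `pytest this_file.py`.
def test_dedupe_latest_keeps_newest_record():
    rows = [
        {"provider_id": "100", "updated_at": "2020-09-01", "name": "old"},
        {"provider_id": "100", "updated_at": "2020-09-03", "name": "new"},
        {"provider_id": "200", "updated_at": "2020-09-02", "name": "only"},
    ]
    out = dedupe_latest(rows)
    assert [r["name"] for r in out] == ["new", "only"]
```

In a fuller suite, FactoryBoy factories would typically generate the fixture rows instead of hand-written dicts.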


See more jobs at Doximity

# How do you apply?

This job post has been closed by the poster, which means they probably have enough applicants now. Please do not apply.