Remote Engineer + Amazon Web Services + Scala + ETL Jobs in July 2020
vidIQ


Senior Data Engineer


Timezones: from UTC-2 to UTC+8

python

scala

etl

aws

πŸ‘ 2,802 viewed | ✍️ 252 applied (9%)
**About vidIQ:**

vidIQ helps YouTube creators and brands generate more views and subscribers while saving time. With over 1 million active weekly users, we are the #1 Chrome Extension for YouTube creators, with clients including Red Bull, Buzzfeed, PBS, TMZ, and BBC, as well as hundreds of thousands of the largest YouTube creators in the world. We’re backed by top Silicon Valley investors including Scott Banister and Mark Cuban. vidIQ is profitable, with a fully remote team of over 25 employees and growing.

**Role & Responsibilities**

vidIQ is seeking a highly motivated Senior Data Engineer with 5+ years of hands-on data engineering experience to join our growing team. The ideal candidate will be a go-getter with the ability to work independently. In this role, you will have oversight of data partitioning, ETL pipeline construction, data compaction, and AWS optimization (sketches of these steps follow the post).

You must be highly collaborative and a self-starter who is able to work in a fast-paced environment. Strong communication skills are essential in this role: you will guide the back-end team on where and how to implement data integration and persistence, report to management on the volumes of data we are gathering, and document the data access points and how to use this data for the team and management.

# Responsibilities

**You will be a good fit for this role if the following are true:**

* You love building things. You like new challenges and strive to ship new features to customers on a regular basis.
* You love to learn. You enjoy keeping up with the latest trends. If a project uses a tool that’s new to you, you dive into the docs and tutorials to figure it out.
* You act like an owner. When bugs appear, you document and fix them. When projects are too complex, you work with others to refine the scope until it’s something you believe can be built in a reasonable amount of time and maintained in the long run.
* You care about code quality. You believe simple is better and strive to write code that is easy to read and maintain.
* You consider edge cases and write tests to handle them. When you come across legacy code that is difficult to understand, you add comments or refactor it to make it easier for the next person.
* You understand balance. Great products must balance performance, customer value, code quality, dependencies, and so on. You know how to consider all of these concerns while keeping your focus on shipping things.
* You over-communicate by default. If a project is off-track, you bring it up proactively and suggest ways to simplify and get things going. You proactively share status updates without being asked and strive to keep things as honest and transparent as possible.

# Requirements

**Minimum experience:**

* 5+ years of experience using *Python* for internal data pipelines (moving data inside the *AWS* account): *numpy*, *pandas* (see the pandas sketch below).
* Experience with *Scala* for external data pipelines (moving data from outside the *AWS* account into it): *FS2*, *http4s*.
* Additional experience with *DynamoDB*, *Lambda*, *Athena*, *S3*, and *AWS Glue*. Familiarity with *Spark* (at the moment, Scala only) preferred.
* Hands-on experience with data workflow orchestration (*Airflow*); see the DAG sketch below.

# Salary

$70,000

# Location

- Timezones: from UTC-2 to UTC+8
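The role's emphasis on Python-based internal pipelines and data partitioning could look, in its simplest form, like the sketch below. This is a minimal illustration, not vidIQ's actual pipeline: the bucket names, paths, and the `timestamp` and `dt` columns are all hypothetical, and it assumes pandas with the pyarrow and s3fs packages installed.

```python
# Minimal internal-ETL sketch: read raw JSON events from S3, derive a
# date partition column, and write the data back as partitioned Parquet.
# All bucket names, paths, and column names are hypothetical.
import pandas as pd

def partition_events(raw_path: str, curated_path: str) -> None:
    # s3fs lets pandas read s3:// URLs directly; the input is assumed
    # to be newline-delimited JSON with a Unix-epoch "timestamp" field.
    events = pd.read_json(raw_path, lines=True)

    # Derive a dt=YYYY-MM-DD partition column from the event timestamp.
    events["dt"] = pd.to_datetime(events["timestamp"], unit="s").dt.date.astype(str)

    # Write Parquet partitioned by dt so Athena/Glue can prune partitions.
    events.to_parquet(curated_path, partition_cols=["dt"], index=False)

if __name__ == "__main__":
    partition_events(
        "s3://example-raw-events/2020/07/01/events.json",  # hypothetical input
        "s3://example-curated/events/",                    # hypothetical output
    )
```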
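The post also names data compaction alongside Athena and S3. One common way to compact many small Parquet files into fewer large ones (an assumption here; the post does not say how vidIQ does it) is an Athena CTAS query kicked off from Python. All database, table, and bucket names below are made up.

```python
# Compaction sketch: use an Athena CTAS query to rewrite many small
# Parquet files as fewer, larger ones. All names are hypothetical.
import boto3

athena = boto3.client("athena", region_name="us-east-1")

# Note: the CTAS target table must not already exist, and the partition
# column (dt) must be the last column selected from the source table.
COMPACT_QUERY = """
CREATE TABLE analytics.events_compacted
WITH (
    format = 'PARQUET',
    external_location = 's3://example-curated/events_compacted/',
    partitioned_by = ARRAY['dt']
) AS
SELECT * FROM analytics.events
"""

response = athena.start_query_execution(
    QueryString=COMPACT_QUERY,
    QueryExecutionContext={"Database": "analytics"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
print("Started compaction query:", response["QueryExecutionId"])
```

Spark or Glue jobs are equally common for compaction; CTAS is shown only because Athena appears in the requirements.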
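Finally, since the listing asks for hands-on Airflow experience, a minimal DAG chaining the two steps above might look like the following. The DAG id, task callables, and schedule are invented; the import paths match the Airflow 1.10 series that was current in mid-2020.

```python
# Orchestration sketch: a daily Airflow DAG that runs the partitioning
# step and then the compaction step. All names are hypothetical.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python_operator import PythonOperator

def run_partition_step(**context):
    # Placeholder: call partition_events() for the execution date's input.
    pass

def run_compaction_step(**context):
    # Placeholder: kick off the Athena CTAS compaction query.
    pass

default_args = {
    "owner": "data-eng",
    "retries": 2,
    "retry_delay": timedelta(minutes=10),
}

with DAG(
    dag_id="events_etl",                # hypothetical DAG name
    default_args=default_args,
    start_date=datetime(2020, 7, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    partition = PythonOperator(
        task_id="partition_events",
        python_callable=run_partition_step,
        provide_context=True,           # Airflow 1.10 passes context kwargs
    )
    compact = PythonOperator(
        task_id="compact_events",
        python_callable=run_compaction_step,
        provide_context=True,
    )
    partition >> compact                # compaction runs after partitioning
```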

See more jobs at vidIQ

# How do you apply?

This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.
Apply for this Job

👉 Please mention that you found the job on Remote OK; this helps us get more companies to post here!

When applying for jobs, you should NEVER have to pay to apply. That is a scam! Posts that link to pages with "how to work online" are also scams; don't use them or pay for them. Always verify that you're actually talking to the company in the job post and not an imposter. Scams in remote work are rampant, so be careful! When you click the Apply button above, you will leave Remote OK and go to that company's job application page outside this site. Remote OK accepts no liability or responsibility as a consequence of any reliance upon information on external sites or here.
