Remote Data + Engineer Jobs in April 2021

Browse 13+ Remote Data Engineer Jobs in April 2021 at companies like Automattic, Air Miles and Culture Biosciences, with salaries from $49,000/year to $150,000/year, working as a Senior Frontend Engineer, Data Engineer or Senior Analytics Engineer.


This month's remote Data + Engineer jobs

Automattic (verified)
🌏 Worldwide

Senior Analytics Engineer

Tags: data, analytics, etl, looker
We are the people behind WordPress.com, Jetpack, and WooCommerce. We’re looking for a Senior Analytics Engineer to join our analytics team. As the data landscape evolves, so do our roles.

**As a Senior Analytics Engineer, you will:**
* Make our data available: you’ll work to connect data sources, design database models, and build ETL pipelines via Airflow.
* Make our data actionable: you’ll prototype and develop powerful Looker models to build smart dashboards that help our product and business teams succeed.
* Perform ad hoc analyses to better understand customer behavior, needs, and individual test results.
* Always be raising data quality. That includes auditing sources, documenting issues, driving resolutions together with other teams, and implementing well-documented, performant, and tested transformations.
* Work together with teams inside and outside our data organization to define clear metrics and meaningful tracking to drive the growth of our business.

**We’d love to hear from you if:**
* You’re proficient in SQL: window functions and CTEs are part of our daily work.
* You have worked in a Hadoop ecosystem before (day to day, we use Impala, Hive, and Spark).
* You have used a data flow scheduling system like Airflow.
* You’re familiar with Scala, Java, Python, or PHP.
* You have experience with data integration, connectors, and related APIs.
* You have used a business intelligence platform: we use Looker, but experience with other platforms and an ability to learn can go a long way.
* You have experience working across teams to deliver analytics solutions and are familiar with the common metrics of a software-as-a-service (SaaS) business.
* You have excellent verbal and written communication skills in English.
* You’re able to communicate clearly with and about data to technical and non-technical partners.
* You’re highly collaborative and experienced in working with business owners, executives, developers, and creatives to discuss data, strategy, and tests.

Curious about who we are and what we work on? Read our [blog](http://data.blog/)!

**ABOUT AUTOMATTIC**

We are the company behind WordPress.com, WooCommerce, Jetpack, Simplenote, Longreads, VaultPress, Akismet, Gravatar, Crowdsignal, Cloudup, and more. We believe in making the web a better place.

We’re a distributed company with more than 1200 Automatticians in 70+ countries speaking 75+ different languages. Our common goal is to democratize publishing so that anyone with a story can tell it, regardless of income, gender, politics, language, or where they live in the world.

We believe in Open Source, and the vast majority of our work is available under the GPL.

**Diversity & Inclusion at Automattic**

We’re improving diversity in the tech industry. At Automattic, we want people to love their work and show respect and empathy to all. We encourage differences and strive to increase participation from traditionally underrepresented groups. Our D&I committee involves Automatticians across the company and drives grassroots change. For example, this group has helped facilitate private online spaces for affiliated Automatticians to gather and helps run a monthly D&I People Lab series for further learning. Diversity and inclusion is a priority at Automattic, and our dedication influences far more than just Automatticians: we make our products freely available, translate them into numerous languages, and offer customer support in those languages. We require unconscious bias training for our hiring teams and ensure our products are accessible across different bandwidths and devices. Read more about our dedication to diversity and inclusion.

#Location
🌏 Worldwide
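As a purely illustrative sketch of the kind of Airflow ETL pipeline this role describes (not Automattic's actual pipeline: all task logic, names, and the schedule here are invented), an extract-transform-load DAG might look like this:

```python
# Minimal Airflow 2.x DAG sketch: extract -> transform -> load as chained
# tasks. Everything here (DAG id, data, schedule) is hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract(**context):
    # Pull raw rows from a source system (stubbed for illustration).
    return [{"user_id": 1, "plan": "business"}]


def transform(**context):
    # Return values of upstream tasks are available via XCom.
    rows = context["ti"].xcom_pull(task_ids="extract")
    # Normalize fields before loading into the warehouse model.
    return [{**r, "plan": r["plan"].upper()} for r in rows]


def load(**context):
    rows = context["ti"].xcom_pull(task_ids="transform")
    print(f"would load {len(rows)} rows into the warehouse")


with DAG(
    dag_id="subscriptions_daily",  # hypothetical pipeline name
    start_date=datetime(2021, 4, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_t = PythonOperator(task_id="extract", python_callable=extract)
    transform_t = PythonOperator(task_id="transform", python_callable=transform)
    load_t = PythonOperator(task_id="load", python_callable=load)

    extract_t >> transform_t >> load_t
```

The `>>` operator declares task ordering; the scheduler then runs each step daily and retries failures independently, which is the main reason postings like this one ask for a workflow scheduler rather than cron.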


See more jobs at Automattic

# How do you apply?

Does this sound exciting? If yes, click the Apply button below and fill out our application form. Let us know what you can contribute to the team!

Please include answers to these questions in your application form:

1. Tell us how the Analytics Engineer role fits into an analytics organization, and how your experience and skills qualify you to play that role.
2. Tell us about a time when you used analytics to accelerate business growth. How did you identify the opportunity, what actions did you take, and what were the exact results of your contributions?
Apply for this position

Previous remote Data + Engineer jobs

Culture Biosciences (verified, closed)
🇺🇸 US-only

Senior Frontend Engineer

Tags: empathy, css, javascript, biotech
This job post is closed and the position is probably filled. Please do not apply.
*If there’s one thing that’s important to know about Culture, it’s that we have a good culture.*

For example:
* We value kindness and empathy over intellect and strong opinions.
* We don’t use the word “resource” to refer to people.
* Imagine a typical bro-y startup… now imagine the opposite… that’s us.

**The Company**

Culture Biosciences grows organisms for biotech companies. We've built the first cloud bioreactor facility. Here’s how it works:

* Our customers, biotechnology companies, design organisms (bacteria, yeast, mammalian cells) to produce products (materials, therapeutics, food).
* We grow their cells in our bioreactors to optimize the yield of their products. Ultimately, we help our customers get their products to market faster.
* Our customers receive their experimental results live on our website.
* Our cloud bioreactor facility is made possible by custom software and automation technology that our team develops. The automated infrastructure is also more efficient to operate than traditional equipment, and our software provides quick and simple analysis, enabling customers to work through reams of data quickly.

**The Role**

We are looking for a frontend software engineer who wants to build the software platform that will transform biomanufacturing. You will collaborate with electrical, mechanical, biological, and chemical engineers to build our core technology. Your work will quickly impact cutting-edge biotechnology companies by helping them get their products to market faster and more efficiently.

We have new and interesting software challenges. Our problems are more than scaling a web service: we model biological processes and operate mission-critical software within bioreactors that sometimes get wet.

You will work with our small, but growing, customer software team to design and build all aspects of our software for customers; from tools that allow our customers to turn around their experiments faster, to features that ensure the quality and safety of their experiments.

What you'll do:
* Help define our software engineering culture
* Proactively solve the problems most important to the business
* Write high-quality software for the frontend
* Write high-quality software for the backend as well, when needed
* Do code reviews
* Learn about fermentation, bio-manufacturing, and biotechnology

In return, we will support you by:
* Placing a high degree of trust in your ideas and execution
* Bringing you up to speed in the domain of fermentation
* Providing a low-stress work environment
* Making ourselves available for collaboration
* Caring about you as a whole person, not a “resource”

Projects on our horizon:
* Building the communication platform for bio-process planning and execution
* Developing new data visualization and analysis tools for our customers
* Exposing complex bio-process protocol data in a simple, understandable way
* Providing live process controls (dangerous!) in a safe and intuitive way
* And much, much more

About You:
* You have at least six years of experience in frontend software development
* You know your way around modern JS, React, and CSS
* You’re proactive and enjoy thinking about the big picture
* You’re kind, curious, and enjoy learning new things
* You’re product-focused: you think about the perspectives of end users even when it might not be expected of you
* You value communication and connection with a multidisciplinary team
* You care that you’re building something that solves a problem and helps end users

Benefits include:
* Competitive salary and equity compensation
* Medical (PPO), Dental, Vision, and Life insurance
* 401(k) plan with company match
* 3 weeks of paid time off and 10 days of company holidays
* 12 weeks of parental leave at full salary
* Access to on-site child care facility, subject to availability
* Free onsite breakfast, lunch, snacks, coffee, and gym (varies based on COVID-19 restrictions)
* Support for relevant educational opportunities

Culture Biosciences provides equal employment opportunities to all employees and applicants. We seek to build a company that promotes inclusion and expands the diversity of our industry as a whole. We encourage people with identities underrepresented in biotech and technology to apply.

#Salary or Compensation
$150,000 — $175,000/year

#Location
🇺🇸 US-only


See more jobs at Culture Biosciences

# How do you apply?

This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.

NannyML (verified, closed)
Europe, Africa or the Middle East (UTC-2 to UTC+3)

Senior Software Engineer

Tags: data, machine learning, product, startup
This job post is closed and the position is probably filled. Please do not apply.
**About Us**

NannyML is an early-stage, venture-funded start-up. At NannyML we build enterprise software for supervising and correcting ML systems in production. That includes detecting data and concept drift, estimating performance loss, and suggesting corrective actions, as well as a dashboard that presents all these insights to business and technical users. Our goal is to ensure that ML systems keep adding value and that the insights that can be extracted from ML systems are clearly communicated to business stakeholders. We want to make ML in production effortless to interact with and extract value from.

**About the Role**

We are looking for a Senior Software Engineer to architect and build a great product. You will be working closely with the founding team. Our expertise is in leveraging business information, exploiting data, and prototyping data solutions. Your expertise comes in to complement the team: you will be responsible for product development from the software and data engineering side, from designing engineering processes and brainstorming with the founders, through prototyping and first implementation, to architectural choices and frameworks. You will have the ownership and the decision-making power to shape everything that lies between product and research. As we grow NannyML, we expect you to grow with us. We envision your path may grow your position into VP of Engineering or similar.

We are an early-stage startup, so you will wear many hats and be expected to do what's needed for the company to succeed, including working on things that you don't know anything about, and at odd hours from time to time — we all do. You will have the opportunity to get meaningfully involved in the areas of product, engineering, hiring, and people management, among others.

We value freedom with responsibility, transparency, and a growth mindset. We believe in generating our own luck by trying out new stuff, always asking, constantly learning, reading, and meeting new people with different world-views. We value trying new things, and we appreciate that from time to time things may break in the process. Working at NannyML, you will have full autonomy to make impactful decisions and to prioritise and organise your work the way you see fit.

**Please do not apply for this role if you are not physically located in Europe, Africa or the Middle East (UTC-2 to UTC+3) or are not fully willing to relocate immediately. This is a fully remote position; however, we will be working with you very closely, so significant work-hours overlap is necessary. You also need to be able to fly to Belgium or Portugal as needed.**

Experience with early-stage product is absolutely necessary. If you don't have such experience, please do not apply. Please bear in mind that this is not a web development position: while web development might be a small part of what you do, it is not going to be your main focus.

**Responsibilities**

* Integrate and productionize data drift detection and prediction algorithms
* Develop and deliver CI/CD, version control, and testing frameworks
* Brainstorm new features and shape the product road map together with the founding team and clients
* Architect, design, and hands-on build the software
* Produce clean, well-documented, and efficient code
* Handle automation, infrastructure, and orchestration
* Help with implementing NannyML at clients and with client on-boarding

**Requirements**

**Basic**

* You significantly contributed to building an early-stage product
* You have experience working with data (such as ML systems, big data, or building data-heavy products like BI tools)
* Great communication skills in English, both oral and written
* You are extremely proactive, independent, and comfortable with proposing new ideas — and holding your ground when you believe you are right
* Strong experience with Python in back-end development, infrastructure, or data engineering
* 5+ years working as a Software Engineer or in a similar role
* You live in or are willing and able to relocate to EU time zones

**Bonus points**

* You were the first engineer or the lead engineer in a venture-funded startup
* Experience with data engineering tools and containerization
* Experience using system monitoring tools and automated testing frameworks
* Experience building enterprise-grade software
* Significant experience with on-premise deployment and integration with enterprise IT systems
* Basic familiarity with Machine Learning
* Master's degree in a STEM-related field

**Benefits**

* Fully remote working environment
* 23+ days of planned leave annually
* Paid sick leave and private healthcare plan
* We support paid parental leave
* Home office, work and well-being allowances (for yoga, gym, etc.) and other nice benefits
* Stock option plan
* Salary: 54,000 — 66,000 EUR/year

#Salary or Compensation
$49,000 — $78,000/year

#Location
Europe, Africa or the Middle East (UTC-2 to UTC+3)
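To make the drift-detection idea above concrete, here is a toy univariate check using a two-sample Kolmogorov-Smirnov test from SciPy. This is a generic illustration, not NannyML's actual algorithm; the synthetic data and the 0.01 threshold are invented.

```python
# Toy univariate data drift check: compare a live feature sample against
# the training-time reference distribution. Illustrative only.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)
reference = rng.normal(loc=0.0, scale=1.0, size=5_000)   # training-time feature
production = rng.normal(loc=0.4, scale=1.0, size=5_000)  # shifted live traffic

stat, p_value = ks_2samp(reference, production)
ALPHA = 0.01  # arbitrary sensitivity threshold for this sketch
if p_value < ALPHA:
    print(f"drift detected (KS={stat:.3f}, p={p_value:.2g})")
else:
    print("no significant drift")
```

A production system would run checks like this per feature and per time window, and would also estimate the impact of any drift on model performance, which is the harder problem the posting alludes to.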


See more jobs at NannyML

# How do you apply?

This job post has been closed by the poster, which means they probably have enough applicants now. Please do not apply.
InReach Ventures (closed)
UK or Italy

Machine Learning Engineer

This job post is closed and the position is probably filled. Please do not apply.
InReach is changing how VC in Europe works, for good. Through data, software, and machine learning, we are building an in-house platform to help us find, reach out to, and invest in early-stage European startups, regardless of the city or country they’re based in.

We are looking for a machine learning engineer to lead the continued development of InReach’s machine learning capabilities. This involves cleaning / wrangling / merging / processing the data on companies and founders from across Europe, building algorithms to find new opportunities, and building the pipelines for continuous improvement.

It is important to us that candidates be passionate about helping entrepreneurs and startups. This is our bread and butter, and we want you to be involved.

This is a remote-first role, whether you're in the office in London or working remotely, so we are looking for someone with excellent written and spoken communication skills. InReach is a remote-first employer, and we are looking to this hire to help us become an exceptional place to work for remote employees.

**Background Reading**

* [InReach Ventures, the 'AI-powered' European VC, closes new €53M fund](https://techcrunch.com/2019/02/11/inreach-ventures-the-ai-powered-european-vc-closes-new-e53m-fund/?guccounter=1)
* [The Full-Stack Venture Capital](https://medium.com/entrepreneurship-at-work/the-full-stack-venture-capital-8a5cffe4d71)
* [Roberto Bonanzinga starts InReach Ventures with DIG platform](https://www.businessinsider.com/roberto-bonanzinga-starts-inreach-ventures-with-dig-platform-2015-11?r=US&IR=T)
* [Exceptional Communication; our guidelines for remote working](https://www.craft.do/s/Isrjt4KaHMPQ)

**Interview Process**

* 15m video chat with Ben, CTO, to find out more about InReach and the role
* 2h data pipeline technical test working alongside Ben
* 2h data science technical test working alongside Ghyslain, Product Manager
* 30m architectural discussion with Ben, talking through the work you did on the pipeline
* 30m data science discussion with Ghyslain, talking through the data science work
* 2h interview with the different team members from across InReach. We’re a small company, so it’s important we see how we’ll all work together - not just the tech team!

# Responsibilities

* Creatively and quickly coming up with effective solutions to undefined problems
* Choosing technology that is modern but not hype-driven
* Developing features and tests quickly with good, clean code
* Researching and experimenting on algorithms in a structured fashion, using engineering discipline
* Being part of the wider development team, reviewing code and participating in architecture from across the stack
* Communicating exceptionally, both asynchronously (written) and synchronously (spoken)
* Helping to shape InReach as a remote-first organization

# Requirements

**Skills**

* Excellent spoken and written English
* Experience working for a remote organization, or the ability to compellingly describe why you'll be great at it!
* Great time management and communication

**Technologies**

* Python3
* Jupyter Notebooks
* Pipenv
* Python Unittest
* Postgres
* Pandas

None of these are a prerequisite, but they help:

* SQS
* DynamoDB
* Scikit-learn
* AWS Lambda
* Docker
* NumPy
* PyTorch

#Salary or Compensation
$70,000/year

#Location
UK or Italy
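As a rough illustration of the sourcing problem described above, the sketch below ranks companies with a scikit-learn pipeline. Every feature name, label, and number is invented for the example; InReach's actual platform and models are not public.

```python
# Hypothetical sketch: score companies on simple features so an investment
# team can triage outreach. Data and features are invented.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

companies = pd.DataFrame({
    "team_size":          [3, 12, 5, 40, 2, 9],
    "months_since_raise": [20, 6, 14, 3, 30, 8],
    "web_traffic_growth": [0.1, 0.8, 0.3, 0.9, 0.0, 0.6],
    "invested":           [0, 1, 0, 1, 0, 1],  # past outreach outcome (label)
})

X = companies.drop(columns="invested")
y = companies["invested"]

# Scale features, then fit a simple linear classifier.
model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X, y)

# Rank a new, unseen company by predicted interest.
new = pd.DataFrame({"team_size": [7],
                    "months_since_raise": [5],
                    "web_traffic_growth": [0.7]})
print(model.predict_proba(new)[:, 1])  # probability of being a good lead
```

The interesting engineering work the post describes sits around a model like this: the cleaning, merging, and continuous-retraining pipelines that feed it.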


See more jobs at InReach Ventures

# How do you apply?

This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.

Intrinio (verified, closed)
🇺🇸 US-only

Senior Ruby Engineer

Tags: dev, ruby, api, data
This job post is closed and the position is probably filled. Please do not apply.
**Why Work at Intrinio**

We are a fast-paced and well-funded startup creating new technology in the financial data market. Our team is highly experienced and productive. We enjoy working together and advancing in our craft. Our goal is to produce world-class software that will significantly disrupt the world of finance, creating new efficiencies and encouraging innovation by smaller players.

**About the Job**

We are looking for a senior-level Ruby software engineer. In this position, you will be actively contributing to the design and development of our financial data platform and products. If you have the skills and ability to build high-quality, innovative, and fully functional software in line with modern coding standards and solid technical architecture, we want to talk to you. Intrinio is a startup (20+ people), so you should be comfortable working on a small team, moving fast, breaking things, committing code several times a day, and delivering working software weekly.

**Ideal candidates will have several of the following:**
* Mastery of the Ruby programming language and its major frameworks
* Knowledge of data stores and their use cases: SQL databases, Redis, and Elasticsearch
* Experience with API development and usage
* A history of learning new technology stacks and methodologies
* Interest (or experience) in the financial markets
* A track record of public and/or private contributions on GitHub

# Responsibilities

* Write well-designed, testable, documented, and performant code
* Commit code several times a day
* Deliver working features on a weekly basis
* Review the code of, and help to mentor, junior and mid-level developers
* Communicate clearly and in a timely manner with managers and team members (we use a Kanban process with Monday and Slack)

# Requirements

* Significant time working remotely (please do not apply otherwise)
* 5+ years of software engineering experience
* Significant experience developing web applications and APIs
* Strong knowledge of relational databases, SQL, and ORM libraries

#Location
🇺🇸 US-only


See more jobs at Intrinio

# How do you apply?

This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.

Kraken Digital Asset Exchange (closed)
North America or Europe

Data Engineer (Cryptowatch)

Tags: data, engineer, growth, digital nomad, marketing
This job post is closed and the position is probably filled. Please do not apply.
You will be an instrumental piece of a small team with a mandate to understand how Cryptowatch visitors and clients are utilizing the product. Succeeding in this role requires knowledge of architecting data systems, a deep understanding of how to measure user behavior, and the ability to translate raw data into easy-to-understand dashboards. You will work closely with marketers and product managers on the Growth team to design and build user behavior measurement infrastructure and translate this data into insights. By structuring and helping build measurement pipelines, you'll help the team learn about customers and drive growth. Your work will directly impact the product roadmap and bottom line of the Cryptowatch business.

You will also help establish measurement of key conversion and retention metrics, then use them to identify opportunities for improvement in the product experience. As a full-stack developer passionate about driving towards business goals, you will work up and down the stack and pick up new tools and frameworks quickly.

# Responsibilities

* Design and help implement data pipelines that collect, transform, and curate data to help our team understand user behavior on the site, using data from external tools and internal databases.
* Work with the Cryptowatch Growth team to design lightweight experiments that help us learn about customers and drive key growth metrics.
* Create structure and process around growth experimentation, data collection, and user research from the ground up.
* Work with Business Operations, Strategy, Marketing, and Product to collectively grow our understanding of our customer base.

# Requirements

* 5+ years of work experience in a relevant field (Data Engineer, DW Engineer, Software Engineer, etc.).
* You are comfortable specifying analytics from the end user's dashboard down to the events in our UI.
* You have expertise with the React.js framework.
* You have experience with Golang and PostgreSQL.
* You are quick to pick up new tools and frameworks.
* You have a strong ability to search a large codebase to find what you’re looking for.
* You are able to communicate effectively with businesspeople, designers, developers, marketers, product managers, and the customer.
* You always ask “why?” and love searching for ways to answer your questions quantitatively.
* You are skilled in data visualisation and web analytics tools like Grafana and Mixpanel.

#Location
North America or Europe
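To illustrate the kind of conversion measurement described above, here is a minimal pandas funnel computation. The event names and data are invented for the example, not Cryptowatch's actual schema.

```python
# Collapse a raw event stream into per-step funnel conversion rates.
# Illustrative only: events and users are made up.
import pandas as pd

events = pd.DataFrame({
    "user_id": [1, 1, 1, 2, 2, 3, 3, 3],
    "event":   ["visit", "signup", "subscribe",
                "visit", "signup",
                "visit", "signup", "subscribe"],
})

funnel = ["visit", "signup", "subscribe"]
counts = [events.loc[events["event"] == step, "user_id"].nunique()
          for step in funnel]

prev = counts[0]
for step, n in zip(funnel, counts):
    print(f"{step:>9}: {n} users ({n / prev:.0%} of previous step)")
    prev = n
```

In a real pipeline the same aggregation would run over warehouse tables fed by the site's instrumentation, with results surfaced in a dashboarding tool like the Grafana and Mixpanel stack the post mentions.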


See more jobs at Kraken Digital Asset Exchange

# How do you apply?

This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.

Thorn (closed)
🇺🇸 US-only

Senior Data Engineer

Tags: data, engineering, engineer, work from home
This job post is closed and the position is probably filled. Please do not apply.
Thorn is a non-profit focused on building technology to defend children from sexual abuse. Working at Thorn gives you the opportunity to apply your skills, expertise, and passions to directly impact the lives of vulnerable and abused children. Our staff solves dynamic, quickly evolving problems with our network of partners from tech companies, NGOs, and law enforcement agencies. If you are able to bring clarity to complexity and lightness to heavy problems, you could be a great fit for our team.

Earlier this year, we took the stage at TED and shared our audacious goal of eliminating child sexual abuse material from the internet. A key aspect of our work is partnering with the National Center for Missing & Exploited Children and building technology to optimize the broader ecosystem combating online child sexual abuse.

**What You'll Do**

* Collaborate with other engineers on your team to build a data pipeline and client application from end to end.
* Prototype, implement, test, deploy, and maintain stable data engineering solutions.
* Work closely with the product manager and engineers to define product requirements.
* Present possible technical solutions to various stakeholders, clearly explaining your decisions and how they address real user needs, incorporating feedback in subsequent iterations.

**What We're Looking For**

* You have a commitment to putting the children we serve at the center of everything you do.
* You have proficient software development knowledge, with experience building, growing, and maintaining a variety of products, and a love for creating elegant applications using modern technologies.
* You’re experienced with DevOps (Docker, AWS, microservices) and can launch and maintain new services.
* You are experienced with distributed data storage systems/formats such as MemSQL, Snowflake, Redshift, Druid, Cassandra, Parquet, etc.
* You have worked with real-time systems using various open source technologies like Spark, MapReduce, NoSQL, Hive, etc.
* You have knowledge of data modeling, data access, and data storage techniques for big data platforms.
* You have an ability and interest in learning new technologies quickly.
* You can work with shifting requirements and collaborate with internal and external stakeholders.
* You have experience prototyping, implementing, testing, and deploying code to production.
* You have a passion for product engineering and an aptitude to work in a collaborative environment, and can demonstrate empathy and strong advocacy for our users while balancing the vision and constraints of engineering.
* You communicate clearly, efficiently, and thoughtfully. We’re a highly distributed team, so written communication is crucial, from Slack to pull requests to code reviews.

**Technologies We Use**

*You should have experience with at least a few of these, and a desire and ability to learn the rest.*

* Python
* Elasticsearch / PostgreSQL
* AWS / Terraform
* Docker / Kubernetes
* Node / TypeScript

#Salary or Compensation
$100,000 — $150,000/year

#Location
🇺🇸 US-only


See more jobs at Thorn

# How do you apply?

This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.
Doximity (closed)
North America

Software Engineer, Data Stack

This job post is closed and the position is probably filled. Please do not apply.
Doximity is transforming the healthcare industry. Our mission is to help doctors be more productive, informed, and connected. As a software engineer focused on our data stack, you'll work within cross-functional delivery teams alongside other engineers, designers, and product managers in building software to help improve healthcare.

Our [team](https://www.doximity.com/about/company#theteam) brings a diverse set of technical and cultural backgrounds, and we like to think pragmatically in choosing the tools most appropriate for the job at hand.

**About Us**

* We rely heavily on Python, Airflow, Spark, MySQL, and Snowflake for most of our data pipelines
* We have over 350 private repositories in GitHub containing our pipelines, our own internal multi-functional tools, and [open-source projects](https://github.com/doximity)
* We have worked as a distributed team for a long time; we're currently [about 65% distributed](https://blog.brunomiranda.com/building-a-distributed-engineering-team-85d281b9b1c)
* Find out more information on the [Doximity engineering blog](https://engineering.doximity.com/)
* Our [company core values](https://work.doximity.com/)
* Our [recruiting process](https://engineering.doximity.com/articles/engineering-recruitment-process-doximity)
* Our [product development cycle](https://engineering.doximity.com/articles/mofo-driven-product-development)
* Our [on-boarding & mentorship process](https://engineering.doximity.com/articles/software-engineering-on-boarding-at-doximity)

**Here's How You Will Make an Impact**

* Collaborate with product managers, data analysts, and data scientists to develop pipelines and ETL tasks in order to facilitate the extraction of insights from data.
* Build, maintain, and scale data pipelines that empower Doximity’s products.
* Establish data architecture processes and practices that can be scheduled, automated, and replicated, and that serve as standards for other teams to leverage.
* Spearhead, plan, and carry out the implementation of solutions while self-managing.

**About You**

* You have at least three years of professional experience developing data processing, enrichment, transformation, and integration solutions
* You are fluent in Python, an expert in SQL, and can script your way around Linux systems with bash
* You are no stranger to data warehousing and designing data models
* Bonus: you have experience building data pipelines with Apache Spark in a multi-database ecosystem
* You are foremost an engineer, making you passionate about high code quality, automated testing, and other engineering best practices
* You have the ability to self-manage, prioritize, and deliver functional solutions
* You possess advanced knowledge of Unix, Git, and AWS tooling
* You agree that concise and effective written and verbal communication is a must for a successful team
* You are able to maintain a minimum of 5 hours overlap with 9:30 to 5:30 PM Pacific time
* You can dedicate about 18 days per year for travel to company events

**Benefits**

Doximity has industry-leading benefits. For an updated list, see our careers page.

**More info on Doximity**

We’re thrilled to be named the Fastest Growing Company in the Bay Area, and one of Fast Company’s Most Innovative Companies. Joining Doximity means being part of an incredibly talented and humble team. We work on amazing products that over 70% of US doctors (and over one million healthcare professionals) use to make their busy lives a little easier. We’re driven by the goal of improving inefficiencies in our $3.5 trillion U.S. healthcare system and love creating technology that has a real, meaningful impact on people’s lives. To learn more about our team, culture, and users, check out our careers page, company blog, and engineering blog. We’re growing steadily, and there are plenty of opportunities for you to make an impact.

*Doximity is proud to be an equal opportunity employer, committed to providing employment opportunities regardless of race, religious creed, color, national origin, ancestry, physical disability, mental disability, medical condition, genetic information, marital status, sex, gender, gender identity, gender expression, pregnancy, childbirth and breastfeeding, age, sexual orientation, military or veteran status, or any other protected classification. We also consider qualified applicants with criminal histories, consistent with applicable federal, state and local law.*

#Location
North America


See more jobs at Doximity

# How do you apply?

This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.

Creative Commons (verified, closed)
🌏 Worldwide

Senior Data Engineer

Tags: data, engineering, engineer, senior
This job post is closed and the position is probably filled. Please do not apply.
Creative Commons is building a “front door” to the growing universe of openly licensed and public domain content through CC Search and the CC Catalog API. The Senior Data Engineer reports to the Director of Engineering and is responsible for CC Catalog, the open source catalog that powers those products. This project will unite billions of records for openly licensed and public domain works and metadata, across multiple platforms, diverse media types, and a variety of user communities and partners.

**Diversity & inclusion**

We believe that diverse teams build better organizations and better services. Applications from qualified candidates from all backgrounds, including those from under-represented communities, are very welcome. Creative Commons works openly as part of a global community, guided by collaboratively developed codes of conduct and anti-harassment policies.

**Work environment and location**

Creative Commons is a fully-distributed organization - we have no central office. You must have reasonable mobility for travel to twice-annual all-staff meetings and the CC Global Summit (a total of 3 trips per year). We provide a subsidy towards high-speed broadband access. A laptop/desktop computer and necessary resources are supplied.

# Responsibilities

**Primary responsibilities**

Architect, build, and maintain the existing CC Catalog, including:
* Ingesting content from new and existing sources of CC-licensed and public domain works.
* Scaling the catalog to support billions of records and various media types.
* Implementing resilient, distributed data solutions that operate robustly at web scale.
* Automating data pipelines and workflows.
* Collaborating with the Backend Software Engineer and Front End Engineer to support the smooth operation of the CC Catalog API and CC Search.

Augment and improve the metadata associated with content indexed into the catalog using one or more of the following: machine learning, computer vision, OCR, data analysis, web crawling/scraping.

Build an open source community around the CC Catalog, including:
* Restructuring the code and workflows so that community contributors can identify new sources of content and add new data to the catalog.
* Guiding new contributors and potentially participating in projects such as Google Summer of Code as a mentor.
* Writing blog posts, maintaining documentation, reviewing pull requests, and responding to issues from the community.

Collaborate with other outside communities, companies, and institutions to further Creative Commons’ mission.

# Requirements

* Demonstrated experience building and deploying large-scale data services, including database design and modeling, ETL processing, and performance optimization
* Proficiency with Python
* Proficiency with Apache Spark
* Experience with cloud computing platforms such as AWS
* Experience with Apache Airflow or other workflow management software
* Experience with machine learning, or interest in picking it up
* Fluent in English
* Excellent written and verbal communication skills
* Ability to work independently, build good working relationships, and actively communicate, contribute, and speak up in a remote work structure
* Curiosity and a desire to keep learning
* Commitment to consumer privacy and security

Nice to have (but not required):
* Experience with contributing to or maintaining open source software
* Experience with web crawling
* Experience with Docker

#Salary or Compensation
$100,000 — $120,000/year

#Location
🌏 Worldwide
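As an illustrative sketch of catalog-style ingestion (not the actual CC Catalog code), the PySpark snippet below normalizes license URLs and deduplicates records by a stable source key. The schema and rows are invented, and it assumes a local PySpark installation.

```python
# Minimal PySpark sketch: normalize a metadata field, then keep one row
# per source record. Schema and values are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("catalog-ingest-sketch").getOrCreate()

raw = spark.createDataFrame(
    [
        ("img-1", "HTTPS://creativecommons.org/licenses/by/4.0/"),
        ("img-1", "https://creativecommons.org/licenses/by/4.0/"),
        ("img-2", "https://creativecommons.org/publicdomain/zero/1.0/"),
    ],
    ["foreign_id", "license_url"],
)

cleaned = (
    raw
    .withColumn("license_url", F.lower(F.col("license_url")))  # normalize case
    .dropDuplicates(["foreign_id"])  # one row per source record
)
cleaned.show(truncate=False)
```

At the catalog's stated scale (billions of records), the same transformations would run as scheduled Spark jobs orchestrated by a workflow tool like the Airflow the requirements list mentions.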


See more jobs at Creative Commons

# How do you apply?

This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.

Good Eggs (verified, closed)
🇺🇸 US-only

Senior Data Platform Engineer

Tags: data, snowflake, dbt, devops
This job post is closed and the position is probably filled. Please do not apply.
At Good Eggs, we believe feeding your family well shouldn’t come with a trade-off — be it your time, your standards, or your wallet. We’re pioneering a new way to fill your fridge, by sourcing the best food from producers we know and trust, and bringing it straight to you — all at a price the same as or less than your grocery store.

We run a healthy agile engineering process with:

* pair programming
* test-driven development
* continuous deployment

# Responsibilities

We're looking for a Data Platform Engineer who is interested in a multidisciplinary engineering environment and is excited to support the culture of data alongside a passionate, mission-driven team.

As a Data Platform Engineer, you'll work on ingest, modeling, warehousing, and BI tools, and have significant influence over the tools and processes we deliver to our customers (analysts, engineers, business leaders). We have a modern data platform and a strong team of DevOps engineers and full-stack data analysts to collaborate with. Some of the tech involved:

* custom code written in multiple languages (primarily Node.js/TypeScript, but also Python and Go)
* Fivetran & Segment
* Snowflake
* dbt
* Mode Analytics
* a modern, AWS-based, containerized application platform

# Requirements

**Ideal candidates will have:**
* A desire to use their talents to make the world a better place
* 2+ years of agile software development experience, including automated testing and pair programming
* 3+ years of full-time data experience (ETL, warehousing, modeling, supporting analysts)
* Interest in learning and adopting new tools and techniques
* Bachelor’s degree in computer science, computer engineering, or equivalent experience

**Experience in some of the following areas:**
* Node.js/TypeScript, Go, Python, SQL
* DevOps, cloud infrastructure, developer tools
* Container-based deployments, microservice architecture

**Bonus points for:**
* Previous work experience involving e-commerce, physical operations, finance, or BizOps
* Being data-driven - the ability to get insights from data
* Experience with dimensional modeling and/or BEAM*

#Location
🇺🇸 US-only


See more jobs at Good Eggs

# How do you apply?

This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.
Synergy Sports Technology (closed)

Senior Backend Data Platform Engineer

This job post is closed and the position is probably filled. Please do not apply.
**The Company**

Synergy Sports Technology, named by Fast Company as one of the world's top 10 most innovative companies in sports, seeks talented **Senior Backend Data Platform Engineers** to join our team on a long-term contract basis.

This position offers a tremendous opportunity to work with the only company that delivers on-demand, professional-level basketball, baseball, and hockey analytics linked to supporting video to nearly 1500 college, professional, and international teams. Our systems are highly complex and contain petabytes of data and video, requiring extremely talented engineers to maintain the scale and efficiency of our products.

As a member of the Synergy team, you will contribute to the ongoing development of Synergy’s revolutionary online sports data and video delivery solutions, building applications such as:

* Client Analytic Tools
* Video Editing and Capture Tools
* Data Logging Tools
* Operational Game, Data and Video Pipeline Tools
* Backend Data and Video Platforms

Synergy’s work environment is geographically distributed, with employees working from home offices. The successful candidate must be comfortable working in a virtual office using online collaboration tools for all communication and interaction in conversational English. Synergy development staff work in a deadline-oriented, demanding, non-standard environment in which personal initiative and a strong work ethic are rewarded. Good communication skills, self-motivation, and the ability to work effectively with minimal supervision are crucial. Non-standard working hours may be required, as Synergy operates on a 24x7 system for clients, with associated deadlines and requirements. Pay rate is dependent on experience.

Information for all positions:

* All positions will last for roughly a year; if engineers are talented, we will keep them for future projects (contracts renew every year).
* Engineers should be available for phone calls M-F from 7am to 10am Pacific time. There will usually be 1 or 2 phone calls each week, 30 to 90 minutes each. All other work-hours availability is up to the engineer, to work when it best fits their communication with their team and their personal commitments outside of work.
* Working an average of 40 hours per week is expected except in rare or temporary circumstances. Each week can be flexible and up to the engineer as to when and how much they work per day. It is OK to work heavier and lighter weeks if desired, based upon the engineer’s preference of when and how to work, but the preference is to average 40 hours per week.
* No travel is required.

# Responsibilities

**Team Objectives**

A candidate joining the Data Platform team can expect to work on the following types of projects:
* Creating internal and external APIs to support both data and video
* Building complex data models supporting the business rules of sports
* Developing algorithms that ingest and transform multiple streams of data and collapse the data into a single event structure
* Refactoring code to a .NET Core environment
* Scaling out current systems to support new sports
* Building build and test automation systems
* Building complex reporting data structures for analytical systems

# Requirements

**Required Skill Sets**
* NoSQL database (MongoDB preferred)
* C# (latest version with a preference to .NET Core)


See more jobs at Synergy Sports Technology

# How do you apply?

This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.
Doximity (closed)

Data Engineer

This job post is closed and the position is probably filled. Please do not apply.
Doximity is transforming the healthcare industry. Our mission is to help doctors save time so they can provide better care for patients.

We value diversity — in backgrounds and in experiences. Healthcare is a universal concern, and we need people from all backgrounds to help build the future of healthcare. Our data team is deliberate and self-reflective about the kind of team and culture that we are building, seeking data engineers and scientists who are not only strong in their own aptitudes but care deeply about supporting each other's growth. We have one of the richest healthcare datasets in the world, and our team brings a diverse set of technical and cultural backgrounds.

You will join a small team of Software Engineers focusing on Data Engineering Infrastructure to build and maintain all aspects of our data pipelines, ETL processes, data warehousing, ingestion, and overall data stack.

**How you’ll make an impact:**

* Help establish robust solutions for consolidating data from a variety of data sources.
* Establish data architecture processes and practices that can be scheduled, automated, and replicated, and that serve as standards for other teams to leverage.
* Collaborate extensively with the DevOps team to establish best practices around server provisioning, deployment, maintenance, and instrumentation.
* Build and maintain efficient data integration, matching, and ingestion pipelines.
* Build instrumentation, alerting, and error-recovery systems for the entire data infrastructure.
* Spearhead, plan, and carry out the implementation of solutions while self-managing.
* Collaborate with product managers and data scientists to architect pipelines that support delivery of recommendations and insights from machine learning models.

**What we’re looking for:**

* Fluency in Python; SQL mastery.
* Ability to write efficient, resilient, and evolvable ETL pipelines.
* Experience with data modeling, entity-relationship modeling, normalization, and dimensional modeling.
* Experience building data pipelines with Spark and Kafka.
* Comprehensive experience with Unix, Git, and AWS tooling.
* An astute ability to self-manage, prioritize, and deliver functional solutions.

**Nice to have:**

* Experience with MySQL replication, binary logs, and log shipping.
* Experience with additional technologies such as Hive, EMR, Presto, or similar.
* Experience with MPP databases such as Redshift, and working with both normalized and denormalized data models.
* Knowledge of data design principles and experience using ETL frameworks such as Sqoop or equivalent.
* Experience designing, implementing, and scheduling data pipelines on workflow tools like Airflow or equivalent.
* Experience working with Docker, PyCharm, Neo4j, Elasticsearch, or equivalent.

**About Doximity**

We’re thrilled to be named the Fastest Growing Company in the Bay Area, and one of Fast Company’s Most Innovative Companies. Joining Doximity means being part of an incredibly talented and humble team. We work on amazing products that over 70% of US doctors (and over one million healthcare professionals) use to make their busy lives a little easier. We’re driven by the goal of improving inefficiencies in our $2.5 trillion U.S. healthcare system and love creating technology that has a real, meaningful impact on people’s lives. To learn more about our team, culture, and users, check out our careers page, company blog, and engineering blog. We’re growing fast, and there are plenty of opportunities for you to make an impact — join us!

*Doximity is proud to be an equal opportunity employer, committed to providing employment opportunities regardless of race, religious creed, color, national origin, ancestry, physical disability, mental disability, medical condition, genetic information, marital status, sex, gender, gender identity, gender expression, pregnancy, childbirth and breastfeeding, age, sexual orientation, military or veteran status, or any other protected classification. We also consider qualified applicants with criminal histories, consistent with applicable federal, state and local law.*

# Requirements

Use apply button


See more jobs at Doximity

# How do you apply?

This job post has been closed by the poster, which means they probably have enough applicants now. Please do not apply.

Air Miles (closed)

Data Engineer

Tags: engineer, data
This job post is closed and the position is probably filled. Please do not apply.
The AIR MILES Rewards Program has earned the trust and support of more than two-thirds of Canadian households. For over two decades, we have helped our Partners use Canada’s most widely accepted loyalty currency, AIR MILES® reward miles, to influence customer behaviour, drive profitability, and build long-term relationships.

Benefits and Perks at AIR MILES:

* Flexible work arrangements
* Tuition reimbursement
* COVID-19 work-from-home safety response
* Annual wellness subsidy
* AIR MILES Gold® Collector status
* Loyalty Days and Anniversary Miles
* Group RRSPs & company match
* Wellness resources, including cognitive therapy
* Recognized as one of Canada’s Top Employers

There’s a reason we’re recognized as one of the best places to work year after year: we give you more than a place to work, we give you a place to grow your career. That’s what sets us apart.

**What Will You Work On?**

As part of the Data Hub team at AIR MILES, you will be responsible for expanding and optimizing our data and data pipeline architecture, as well as optimizing data flows and collection for cross-functional teams. The pipeline needs to be scalable, repeatable, and secure. You will work with some of the largest and most varied data sets (both batch and real-time) in Canada.

**How Will You Create Impact?**

You will expand and develop the AIR MILES cloud analytical platform that enables business users, data analysts, and data scientists to make data-driven decisions, build innovative data products, and roll out advanced analytics.

**Let's Talk About You!**

* Ability and desire to work in our collaborative environment: open team room, pair programming, and fluid interactions with all product and operations teams.
* Focus on building solutions using an agile approach: close relationships with Product Managers, communicating and digesting real-time feedback, and working smartly to build story cards on a daily basis.
* Passionate about Big Data and the latest trends and developments. We strongly believe in and encourage continuous learning.
* You are self-driven, need minimal supervision, and are comfortable pushing your own projects and getting things done.
* Experience with Python, Spark, and SQL
* Experience building ‘big data’ pipelines, architectures, and datasets
* Experience with Amazon AWS and other cloud platforms
* Experience with Databricks
* Experience with Agile methodologies, and familiarity with CI/CD tools (Jenkins, Travis, GitHub)
* Experience in ETL and data modeling preferred
* Experience in designing and implementing streaming applications preferred
* Full understanding of standard architecture methodologies, processes, and best practices

**Our COVID-19 Response**

The well-being of our Associates is our top priority. Since March 2020, we have asked all Associates to work from home until further notice. Everyone is set up with the tools and resources needed to keep us all connected and make work-from-home routines more comfortable. We continue to follow the guidance of the provinces, municipalities, and public health agencies, as well as consider the safety, health, and interests of our Associates, as we assess and make decisions on reopening our office locations.

Check us out — AIR MILES, a LoyaltyOne Company on StackOverflow | LinkedIn | Glassdoor | Facebook | Twitter | Instagram LoyaltyOne Culture | Instagram AIR MILES
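As a hedged sketch of the streaming work mentioned in the requirements above (not AIR MILES' actual stack), here is a minimal Spark Structured Streaming job that counts events per minute from a Kafka topic. The broker address and topic name are hypothetical, and it assumes the spark-sql-kafka connector package is available on the classpath.

```python
# Minimal Spark Structured Streaming sketch: read a Kafka topic, aggregate
# event counts per minute, write the running result to the console.
# Broker and topic names are invented for illustration.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("stream-sketch").getOrCreate()

events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
    .option("subscribe", "collector-events")           # hypothetical topic
    .load()
)

# The Kafka source exposes a `timestamp` column; bucket it into 1-minute
# tumbling windows and count events per window.
counts = events.groupBy(F.window(F.col("timestamp"), "1 minute")).count()

query = (
    counts.writeStream
    .outputMode("complete")  # emit the full updated aggregate each trigger
    .format("console")
    .start()
)
query.awaitTermination()
```

A production version would write to a warehouse or lake table instead of the console and add watermarking so state does not grow without bound.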


See more jobs at Air Miles

# How do you apply?

This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.