Remote Data Science + Python Jobs


Shopify

Staff Data Scientist
Canada, United States
This position is a Remote OK original posting (verified) and is now closed.

Tags: data scientist, big data, object oriented programming

This job post is closed and the position is probably filled. Please do not apply.
**Company Description**

Data is a crucial part of Shopify’s mission to make commerce better for everyone. We organize and interpret petabytes of data to provide solutions for our merchants and stakeholders across the organization. From pipelines and schema design to machine learning products and decision support, data science at Shopify is a diverse role with many opportunities to positively impact our success.

Our Data Scientists focus on pushing products and the business forward, concentrating on solving important problems rather than on specific tools. We are looking for talented data scientists to help us better understand our merchants and buyers so we can help them on their journey.

**Job Description**

Do you get excited about all things data? Are you looking for a role where you can see the tangible results of your work? If you're excited by solving hard, impactful problems and have a passion for logistics, our Staff Data Scientist role may be right for you.

**Qualifications**

* 7-10 years of commercial experience in a data science role solving high-impact business problems
* Well-built technical experience that inspires creativity and innovation in individual contributors
* Experience with product leadership and technical decision making
* Experience working closely with business stakeholders at all levels, from the C-suite down
* Ability to jump deep into the code, while also contributing to long-term initiatives by mentoring your team
* Excitement about multiple work streams; you treat ambiguity as an opportunity for high-level thinking
* Experience creating data product strategies, building data products, iterating after launch, and trying again
* Extensive experience using Python, including a strong grasp of object-oriented programming (OOP) fundamentals

**What would be great if you have:**

* Previous experience using Spark
* Experience with statistical methods like regression, GLMs, or experiment design and analysis
* Exposure to Tableau, QlikView, Mode, Matplotlib, or similar data visualization tools

**Additional information**

If you’re interested in helping us shape the future of commerce at Shopify, click the “Apply Now” button to submit your application. Please submit a resume and cover letter with your application. Make sure to tell us how you think you can make an impact at Shopify, and what drew you to the role.

#Location
Canada, United States
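The "great if you have" list above mentions regression, GLMs, and experiment design and analysis. As a minimal, entirely illustrative sketch of that last skill (not Shopify code; the numbers are invented), here is a two-proportion z-test comparing conversion rates between an A/B experiment's arms, using only the standard library:

```python
import math

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """Z-test for the difference between two conversion rates.

    conv_a/conv_b are conversion counts, n_a/n_b sample sizes.
    Returns (z, two-sided p-value) under the pooled-variance
    normal approximation.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Example: control converts 200/5000 viewers, variant 260/5000.
z, p = two_proportion_ztest(200, 5000, 260, 5000)
print(round(z, 2), round(p, 4))
```

The normal approximation is only reasonable with enough conversions per arm; for small counts an exact test would be the safer choice.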


See more jobs at Shopify

# How do you apply?

This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.

Shopify

Senior Data Scientist
United States, Canada
This position is a Remote OK original posting (verified) and is now closed.

Tags: senior, data scientist, azure

This job post is closed and the position is probably filled. Please do not apply.
**Company Description**

Shopify is now permanently remote and working towards a future that is digital by default. Learn more about what this can mean for you.

At Shopify, we build products that help entrepreneurs around the world start and grow their business. We’re the world’s fastest-growing commerce platform, with over 1 million merchants in more than 175 countries and solutions spanning point-of-sale, online commerce, financial services, shipping, logistics, and marketing.

**Job Description**

Data is a crucial part of Shopify’s mission to make commerce better for everyone. We organize and interpret petabytes of data to provide solutions for our merchants and stakeholders across the organization. From pipelines and schema design to machine learning products and decision support, data science at Shopify is a diverse role with many opportunities to positively impact our success.

Our Data Scientists focus on pushing products and the business forward, concentrating on solving important problems rather than on specific tools. We are looking for talented data scientists to help us better understand our merchants and buyers so we can help them on their journey.

**Responsibilities:**

* Proactively identify and champion projects that solve complex problems across multiple domains
* Partner closely with product, engineering and other business leaders to influence product and program decisions with data
* Apply specialized skills and fundamental data science methods (e.g. regression, survival analysis, segmentation, experimentation, and machine learning when needed) to inform improvements to our business
* Design and implement end-to-end data pipelines: work closely with stakeholders to build instrumentation and define dimensional models, tables or schemas that support business processes
* Build actionable KPIs, production-quality dashboards, informative deep dives, and scalable data products
* Influence leadership to drive more data-informed decisions
* Define and advance best practices within data science and product teams

**Qualifications**

* 4-6 years of commercial experience as a Data Scientist solving high-impact business problems
* Extensive experience with Python and software engineering fundamentals
* Experience with applied statistics and quantitative modelling (e.g. regression, survival analysis, segmentation, experimentation, and machine learning when needed)
* Demonstrated ability to translate analytical insights into clear recommendations and effectively communicate them to technical and non-technical stakeholders
* Curiosity about the problem domain and an analytical approach
* Strong sense of ownership and growth mindset

**Experience with one or more of:**

* Deep understanding of advanced SQL techniques
* Expertise with statistical techniques and their applications in business
* Masterful data storytelling and strategic thinking
* Deep understanding of dimensional modelling and scaling ETL pipelines
* Experience launching productionized machine learning models at scale
* Extensive domain experience in e-commerce, marketing or SaaS

**Additional information**

At Shopify, we are committed to building and fostering an environment where our employees feel included, valued, and heard. Our belief is that a strong commitment to diversity and inclusion enables us to truly make commerce better for everyone. We strongly encourage applications from Indigenous people, racialized people, people with disabilities, people from gender and sexually diverse communities and/or people with intersectional identities. Please take a look at our 2019 Sustainability Report to learn more about Shopify's commitments.

#Location
United States, Canada
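One of the "experience with one or more" items above is a deep understanding of advanced SQL techniques. A minimal sketch of one such technique, a window function computing running revenue per merchant, runnable through Python's built-in sqlite3 module (the table and column names are invented for illustration, not Shopify's schema; SQLite 3.25+ is needed for window functions):

```python
import sqlite3

# Toy orders table with illustrative names.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE orders (merchant_id INTEGER, order_day TEXT, amount REAL);
INSERT INTO orders VALUES
  (1, '2021-01-01', 10.0), (1, '2021-01-02', 20.0),
  (1, '2021-01-03', 30.0), (2, '2021-01-01', 5.0),
  (2, '2021-01-02', 15.0);
""")

# Cumulative revenue per merchant via SUM() OVER a window.
rows = con.execute("""
SELECT merchant_id, order_day, amount,
       SUM(amount) OVER (
         PARTITION BY merchant_id
         ORDER BY order_day
       ) AS running_revenue
FROM orders
ORDER BY merchant_id, order_day
""").fetchall()

for row in rows:
    print(row)
```

The default window frame with an `ORDER BY` clause is cumulative (unbounded preceding through the current row), which is what makes this a running total.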


See more jobs at Shopify

# How do you apply?

This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.

Shopify

Staff Data Scientist
United States, Canada
This position is a Remote OK original posting (verified) and is now closed.
This job is getting a relatively high number of applications (11% of viewers clicked Apply).

Tags: ecommerce, big data

This job post is closed and the position is probably filled. Please do not apply.
**Company Description**

Data is a crucial part of Shopify’s mission to make commerce better for everyone. We organize and interpret petabytes of data to provide solutions for our merchants and stakeholders across the organization. From pipelines and schema design to machine learning products and decision support, data science at Shopify is a diverse role with many opportunities to positively impact our success.

Our Data Scientists focus on pushing products and the business forward, concentrating on solving important problems rather than on specific tools. We are looking for talented data scientists to help us better understand our merchants and buyers so we can help them on their journey.

**Job Description**

Do you get excited about all things data? Are you looking for a role where you can see the tangible results of your work? If you're excited by solving hard, impactful problems and have a passion for logistics, our Staff Data Scientist role may be right for you.

**Qualifications**

* 7-10 years of commercial experience in a data science role solving high-impact business problems
* Well-built technical experience that inspires creativity and innovation in individual contributors
* Experience with product leadership and technical decision making
* Experience working closely with business stakeholders at all levels, from the C-suite down
* Ability to jump deep into the code, while also contributing to long-term initiatives by mentoring your team
* Excitement about multiple work streams; you treat ambiguity as an opportunity for high-level thinking
* Experience creating data product strategies, building data products, iterating after launch, and trying again
* Extensive experience using Python, including a strong grasp of object-oriented programming (OOP) fundamentals

**What would be great if you have:**

* Previous experience using Spark
* Experience with statistical methods like regression, GLMs, or experiment design and analysis
* Exposure to Tableau, QlikView, Mode, Matplotlib, or similar data visualization tools

**Additional information**

If you’re interested in helping us shape the future of commerce at Shopify, click the “Apply Now” button to submit your application. Please submit a resume and cover letter with your application. Make sure to tell us how you think you can make an impact at Shopify, and what drew you to the role.

#Location
United States, Canada


See more jobs at Shopify

# How do you apply?

This job post has been closed by the poster, which means they probably have enough applicants now. Please do not apply.
komoot

This job post is closed and the position is probably filled. Please do not apply.
**Remote (Europe) | Full-time & permanent role | Outdoor navigation app | 15M+ users | 70 employees + 40 freelancers | squad structure**

**Senior Backend Developer / Data Scientist @ komoot (Kotlin/Java/Scala & Python)**

[email protected]

At komoot we are building the best outdoor navigation planner and the biggest outdoor community, and we simply inspire people to explore more of the outdoors.

**15 million users | 100k 5-star reviews**

We are now searching for a **Senior Backend Engineer / Data Scientist** who would like to take their skills to the next level, loves to find simple and smart solutions to complex problems, and focuses on delivering impact.

Komoot possesses a unique dataset of **user-generated content, ranging from GPS data from tours, uploaded photos, and tips, to implicit and explicit user feedback**. To get the biggest value from this raw data, we need your excellent software and analytical skills. The challenges include automatic evaluation and classification of our user-generated content, as well as innovative approaches to assembling it into consumable inspiration for users (e.g. auto-generated tour suggestions tailored to users’ sport and location). You’ll wrap up your algorithms into clean and scalable microservices in our modern cloud environment.

We believe that innovations based on data science will reinforce and extend our leadership in the outdoor market, and your role will be decisive for komoot’s success.

**What you will do**
* Work closely with our data scientists, web and mobile developers, designers, copywriters and product managers
* Discuss product improvements, technical possibilities and road maps
* Investigate and evaluate data science approaches for product enhancements (you can count on the support of experienced data scientists)
* Turn prototypes into resilient and scalable REST APIs and background workers
* Deploy and monitor your code in our AWS Cloud

**You will be successful in this position if you**
* Have a passion for finding pragmatic and smart solutions to complex problems
* Have 3+ years of professional experience in backend development in the cloud
* Have 2+ years of professional experience in Kotlin/Java/Scala
* Have 2+ years of professional experience in Python
* Know SQL and NoSQL
* Have a solid understanding of math and statistics
* Have a solid understanding of data science fundamentals
* Master Jupyter, pandas, NumPy, matplotlib/seaborn, scikit-learn
* Bonus: big data experience (Spark, Presto, Hadoop)
* Bonus: infrastructure as code
* Bonus: deployment, CI and monitoring
* Have strong communication and team skills
* Have a hands-on attitude and are highly self-driven

For more information, click the apply link, which will direct you to our website. To find out more about how we recruit and about salary, follow this link: https://www.komoot.com/jobs-process

#Location
🇪🇺 EU-only
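The posting mentions working with GPS data from tours. As a minimal, hedged illustration (our own sketch, not komoot code), a tour's length can be estimated from its GPS points with the haversine formula, using only the standard library:

```python
import math

def haversine_km(p1, p2):
    """Great-circle distance in km between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*p1, *p2))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = (math.sin(dlat / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2)
    return 2 * 6371.0 * math.asin(math.sqrt(a))  # mean Earth radius in km

def tour_length_km(track):
    """Total length of a GPS track given as a list of (lat, lon) points."""
    return sum(haversine_km(a, b) for a, b in zip(track, track[1:]))

# Three points in Berlin, roughly a 3 km route.
track = [(52.5200, 13.4050), (52.5300, 13.4200), (52.5400, 13.4400)]
print(round(tour_length_km(track), 2))
```

Real tracks would also need smoothing of GPS noise before summing segments, since jitter inflates the raw point-to-point total.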


See more jobs at komoot

# How do you apply?

This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.

Sporty

Business Intelligence Data Analyst
🌏 Worldwide
This position is a Remote OK original posting (verified) and is now closed.

Tags: mysql, oracle, sql, metabase

This job post is closed and the position is probably filled. Please do not apply.
### Company Overview
Sporty is a mobile internet company with a focus on emerging markets. Our integrated sports media, betting, gaming and social platform serves a huge userbase across numerous countries. We have a talented and proven team of 200+ people, comprised of 50+ tech staff and 150+ product, operations and support staff, and we are looking to expand our tech team to 100+ people as we drive further geographical expansion while iterating on our offering with a user-driven development approach.

### Tech Stack
* Oracle
* MySQL
* AWS/RDS
* Python
* Metabase

### Benefits
* Competitive salary
* Quarterly bonuses
* Flash bonuses
* Top-of-the-line equipment
* Pick your own working hours (we are at GMT+8, and 4 hours of overlap is required)
* 20 days of paid leave
* Referral program
* Education allowance (conferences, books, training courses, Udemy, Coursera, etc.)
* Annual company trips (e.g. next year Koh Samui, Thailand)
* Small enough to allow you to have a big impact
* Large enough to provide structure and clarity
* Highly talented, dependable co-workers
* Global, multi-cultural organization

# Responsibilities
* Delivering clear analysis and reporting of core business metrics to shareholders
* Creation and management of reports and dashboards
* Monitoring the performance of key business products
* Providing actionable insight to drive growth of core products
* Extraction and analysis of large data sets from MySQL/Oracle DBs
* Ad hoc analysis and reporting to clients and shareholders

# Requirements
* Delivering clear analysis and reporting of core business metrics to shareholders
* Creation and management of reports and dashboards
* Monitoring the performance of key business products
* Providing actionable insight to drive growth of core products
* Extraction and analysis of large data sets from MySQL/Oracle DBs
* Ad hoc analysis and reporting to clients and shareholders

#Salary and compensation
$20,000–$70,000/year

#Location
🌏 Worldwide
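The responsibilities above center on extracting data sets and reporting core business metrics. A toy sketch of aggregating a per-day metric report in plain Python (the event records and field names are invented, not Sporty's schema):

```python
from collections import defaultdict

# Illustrative bet events: (day, user_id, stake).
events = [
    ("2021-03-01", "u1", 5.0), ("2021-03-01", "u2", 10.0),
    ("2021-03-01", "u1", 2.5), ("2021-03-02", "u3", 7.5),
    ("2021-03-02", "u1", 4.0),
]

daily_users = defaultdict(set)     # distinct users seen per day
daily_stake = defaultdict(float)   # total stake per day
for day, user, stake in events:
    daily_users[day].add(user)
    daily_stake[day] += stake

report = {
    day: {"active_users": len(daily_users[day]),
          "total_stake": daily_stake[day]}
    for day in sorted(daily_users)
}
print(report)
```

In practice the same aggregation would usually be pushed into SQL (`GROUP BY day` with `COUNT(DISTINCT user_id)` and `SUM(stake)`) and surfaced through a dashboard tool such as Metabase.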


See more jobs at Sporty

# How do you apply?

This job post has been closed by the poster, which means they probably have enough applicants now. Please do not apply.

Kokoon Technology Ltd

Lead Backend Developer
🇪🇺 EU-only
This position is a Remote OK original posting and is now closed.

Tags: ruby on rails, postgresql
This job post is closed and the position is probably filled. Please do not apply.
The role: The successful candidate will need the ability to contribute to, and take ownership of, a variety of challenges associated with backend development. They will work closely with our small, passionate in-house product development team, whose focus areas include product design, data science, bio-sensing, IoT and audio systems. The Backend Developer role involves taking features from conceptualisation and prototyping through to production and maintenance; it will ideally suit someone with broad production experience using Ruby on Rails, Docker, PostgreSQL and Python. Ruby/Rails is the most important area of experience for this role. We expect the candidate to bring their expertise in designing for maintainability, reliability and scalability to the fore. The role suits a candidate who has excellent backend technology knowledge but also maintains a thirst for expanding their expertise, can concisely communicate their ideas to cross-functional team members, and can balance their workload to effectively meet the needs of the business. We are considering remote workers for the role, so the ability to work remotely and effectively is important.

A bit about the backend: The backend manages all audio, bio-sensor and usage data; sensor data processing and (ML-based) algorithms; and the personalised audio recommendation system, and it is used for ongoing data science and research purposes. Ruby on Rails is used for the main backend app.

About the Company:

Kokoon Tech is an award-winning, venture-capital-backed health and wellness start-up based in London with a mission to help the world relax and sleep better. Kokoon Tech is doing this by gaining a uniquely deep understanding of the link between relaxation/sleep and audio content. Its apps, associated backend systems and sensor-enabled products provide adaptive audio content and coaching clinically shown to induce and protect relaxation. After shipping their first headphone product in 2018, they have shipped over 27,000 units to over 50 countries.

Benefits
• Competitive salary
• Generous share options
• Pension scheme
• Friendly, dynamic team

# Requirements
Role experience requirements:
• 5+ years of professional backend development experience with Ruby on Rails
• Active Admin
• PostgreSQL
• Containerisation (Docker)
• Data migration and analysis
• Sidekiq job scheduling (or equivalent)
• Python
• RSpec and preparing integration and end-to-end tests (or equivalent)
• System monitoring tools (such as New Relic/Airbrake)
• Degree in Software/Computer Science (or similar technical/engineering degree), or proven experience in the development and design of backend applications
• Full software development lifecycle experience, from the design phase through to production and maintenance
• Good understanding of design patterns
• Excellent professional skills
• Experience with GitHub SCM

Desirable skills/experience include:
• DevOps
• Machine Learning
• Data Science

#Salary and compensation
$64,000/year

#Location
🇪🇺 EU-only


See more jobs at Kokoon Technology Ltd

# How do you apply?

This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.

Iterative

Senior Software Engineer
🌏 Worldwide
This position is a Remote OK original posting (verified) and is now closed.

Tags: open source, dev tools, dvc

This job post is closed and the position is probably filled. Please do not apply.
We're a well-funded team building open source tools for the ML workflow. Our first and core project, [DVC](https://github.com/iterative/dvc), has excellent traction and positive feedback from the community. Our mission as a company is to build tools for engineers and data scientists that they love to use.

**Our culture/what we offer:**

- The team is distributed remotely, worldwide.
- Highly competitive salary, stock options, and bonuses.
- Open source.
- Founders and team with strong engineering, data science, and open source experience. We all code and understand engineering first-hand.
- The engineering team is involved in product discussions and planning. We do it openly via [GitHub](https://github.com/iterative/dvc) or Discord [chat](https://dvc.org/chat).
- Besides building the product, we participate in conferences (PyCon, PyData, O'Reilly AI, etc.). We encourage and support the team in giving talks, writing blog posts, etc.
- A well-defined process that we all participate in improving.

**Join us if, like us, you love building open source and developer tools!**

# Responsibilities
- Discuss and research issues, features, and new products.
- Write code (see some [PR examples](https://github.com/iterative/dvc/pulls?q=is%3Apr+is%3Aclosed)).
- Write docs for your code if needed (see this [repo](https://github.com/iterative/dvc.org)).
- Be actively involved with the community: talk to users on GitHub, Discord, and the forum.

# Requirements
Strong Python knowledge and excellent coding culture (standards, unit tests, etc.) are required. Alternatively, strong skills in other languages along with some knowledge of Python are also acceptable.

**Must have:**

- Motivation and interest
- Remote work self-discipline
- Excellent communication skills: clear, constructive, and respectful dialog with other team members and the community
- Ability to focus on and deliver a task without constantly switching to other work, respecting the team's planning, deadlines, etc.

**Great to have:**

- Experience working remotely
- Open source contributions, or experience maintaining or developing an open source project
- System programming experience: kernels, databases, etc.
- Machine learning or data science experience

#Salary and compensation
$1/year

#Location
🌏 Worldwide
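The requirements stress coding culture, including unit tests. A minimal sketch of that style with Python's built-in unittest (the checksum helper is our own illustration of how data-versioning tools commonly identify file versions, not actual DVC code):

```python
import hashlib
import unittest

def content_hash(data: bytes) -> str:
    """Hex MD5 digest identifying a version of a data file.

    Content-addressed hashing like this is a common approach in
    data-versioning tools; shown here purely as a test subject.
    """
    return hashlib.md5(data).hexdigest()

class ContentHashTest(unittest.TestCase):
    def test_is_deterministic(self):
        self.assertEqual(content_hash(b"data"), content_hash(b"data"))

    def test_distinguishes_content(self):
        self.assertNotEqual(content_hash(b"a"), content_hash(b"b"))

    def test_hex_length(self):
        # MD5 digests are 128 bits, i.e. 32 hex characters.
        self.assertEqual(len(content_hash(b"anything")), 32)
```

Run with `python -m unittest <module>`; each test pins down one property of the function, which is the kind of small, focused test the posting has in mind.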


See more jobs at Iterative

# How do you apply?

This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.

komoot

Senior Data Analyst
UTC-1 to UTC+3
This position is a Remote OK original posting (verified) and is now closed.

Tags: sql, data_analyst, pandas

This job post is closed and the position is probably filled. Please do not apply.
**Millions of people experience real-life adventures with our apps. We help people all over the world discover the best hiking and biking routes, empowering our users to explore more of the great outdoors. And we’re good at it: Google and Apple have listed us as one of their Apps of the Year numerous times, and we are consistently ranked amongst the highest-grossing apps in both Google Play and the App Store.

We’re looking for an experienced and curious data analyst to help drive product decisions and help us develop the company strategy. We believe that data-informed decision making is key to our success, and your skills, curiosity, and experience will play a crucial role in building the future of outdoor experiences.**

Why you will love it
* You’ll be a key player in making product development, marketing, and business decisions
* You’ll work closely with komoot’s co-founder and influence the future direction of komoot
* You’ll influence, and be responsible for, the future development of analytics at komoot, both technically and organizationally
* You’ll work in a fast-paced startup with strongly motivated and talented co-workers
* You’ll enjoy the freedom to organize yourself the way you want
* We let you work from wherever you want, be it a beach, the mountains, your house or anywhere else that lies in any time zone between UTC-1 and UTC+3
* You’ll travel with our team to amazing outdoor places several times a year to exchange ideas and learnings, and to go for hikes and rides

# Responsibilities
What you will do
* Turn data into actionable insights
* Develop analyses to drive product and business decisions and company strategy
* Design and implement metrics, dashboards, and continuous reports
* Communicate and discuss findings with different audiences (co-founders, marketing, product, sales, etc.)
* Design and lead bigger analytics projects to answer komoot’s key questions
* Organize and prioritize tasks on our data analytics roadmap
* Be an evangelist of data-informed decision-making

# Requirements
You will be successful in this position if you
* Have a burning desire to transform data into actionable insights
* Have 3+ years of experience in evaluating marketing campaigns, cohort analysis, A/B testing and retention
* Are fluent in SQL and Python’s data analytics libraries (pandas, NumPy, matplotlib)
* Have strong communication and team skills
* Are familiar with statistical concepts
* Have a good overview of technical solutions in the data analytics field
* Have a hands-on attitude and are highly self-driven

#Location
UTC-1 to UTC+3
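The requirements mention cohort analysis and retention. A minimal sketch, on invented event data, of building a cohort retention table in plain Python (in practice this would typically be a pandas pivot or a SQL query):

```python
from collections import defaultdict

# Illustrative activity events: (user_id, week_of_activity).
events = [
    ("u1", 0), ("u1", 1), ("u1", 2),
    ("u2", 0), ("u2", 2),
    ("u3", 1), ("u3", 2),
]

# Each user's cohort is the first week they were active.
first_week = {}
for user, week in sorted(events, key=lambda e: e[1]):
    first_week.setdefault(user, week)

# retention[cohort][offset] = users from `cohort` active `offset` weeks later.
retention = defaultdict(lambda: defaultdict(set))
for user, week in events:
    cohort = first_week[user]
    retention[cohort][week - cohort].add(user)

table = {
    cohort: {off: len(users) for off, users in sorted(offsets.items())}
    for cohort, offsets in sorted(retention.items())
}
print(table)
```

Reading the result: row 0 is the week-0 cohort, and its value at offset 1 counts how many of those users came back one week later, which is the retention curve analysts plot per cohort.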


See more jobs at komoot

# How do you apply?

This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.

Customer.io

Data Analyst
This position is a Remote OK original posting and is now closed.

Tags: sql, data analyst, numbers

This job post is closed and the position is probably filled. Please do not apply.
Hi, I’m Colin, CEO of Customer.io.

Customer.io is looking for a **Data Analyst** to help our company understand what’s happening in the business through data. You’ll be independent, working outside of any one team, so that you’re an unbiased voice exploring our data to answer questions for the rest of the company.

At Customer.io, data helps inform many decisions, but it’s one factor alongside other things like customer research, our experience, and intuition. You *won’t* find us running experiments to figure out which of 50 shades of blue is the best one for a button color!

You’ll work alongside 40 people from around the world working each day to help businesses build great experiences and strong relationships with their customers. Over 1,100 businesses use Customer.io as a core part of their website or mobile app to send emails, push notifications, SMS messages, and more to their audience. The insight you provide will help everyone in our company do their job better and impact the lives of the millions of people downstream who receive messages from our customers.

# Responsibilities
**As the Data Analyst at Customer.io, you’ll work on projects like:**
* Helping our Product Team measure usage and uptake of our new Push Notifications feature.
* Helping our Marketing Team understand how their work doubling our number of monthly trials (nice work, Marketing!) is leading to downstream success in creating new, long-term customers.
* Working with the CEO (that’s me) to create dashboards showing the key company metrics that we’ll use to make strategic decisions.
* Analyzing whether or not a pricing change we’ve made has paid off.

**How you’ll work at Customer.io**

A good way to think about this role is as a consultant serving multiple internal clients. You’ll often collaborate with our Leadership, Product, Marketing, and Sales Teams. An effective person will be able to balance multiple internal clients, get to the root of what each person is asking for, and allocate enough time to do the work that delivers insights.

You’ll build dashboards and reports on top of the data pipeline, which you’ll ensure reflects our business as accurately as possible. Don’t worry though: our SRE team keeps the data warehouse healthy, and back-end engineers can jump in as needed to help you out.

You control your schedule and how you do your work, but the majority of our team is in North American time zones, and as long as you can have a few hours of overlap each day, we don’t mind where you are.

Since our team is distributed, almost all of our communication is written. Especially in the Data Analyst role, clear writing is critical to educating others about your findings.

# Requirements
**You’ve probably got many of these:**
* You’re fluent in using SQL to query and transform data.
* You have experience communicating complex analyses to a non-technical audience.
* You understand how to present data for understanding and accuracy.

**Bonus, but not required (shout about it in your application if you have experience):**
* Marketing technology and/or SaaS company experience. If you’ve built reports showing CAC-to-LTV calculations, that’s great, but you can learn that too.
* Experience using Python for data analysis.
* You’ve used tools in our stack like dbt, Mode, Looker, Stitch Data, and Amazon Redshift.

**Why should you work with Customer.io?**

Work anywhere in the world you want. We want to enable you to do your best work, and this is how we aim to do that:

**Equity** - You'll own a piece of the company that you help create. You’ll be recognized for your contributions by earning equity in the company.

**Big Impact** - Our team is small, but growing quickly. The work you do will materially impact how successful we are as a company.

**Great Tools** - Everyone in the company has an Apple computer and is given a budget for a motorized standing desk, Steelcase Leap office chair, external monitor, and anything else you'd like to get your job done.

**Health Benefits** - We pay 100% of your premiums for you and your family for medical, dental, and vision. We also cover employees’ short-term disability, long-term disability, and life insurance.

**Paid Parental & Medical Leave** - Including adoption.

**Retreats** - We get our whole company together twice a year. Recent retreats have been in Quebec City, Chicago, and Iceland.

**Vacation** - Rest and recuperation are essential. We offer 4 weeks of paid vacation plus the standard local holidays where you are, and flexibility if you need an extra day here and there.

***Diversity statement***

At Customer.io, we’re committed to building a diverse environment and encourage applicants from underrepresented groups. We want people with different backgrounds from the team we have today to bring their perspective and thoughtfulness to the work that we do and the culture we foster.

Come join us!

P.S. Hey, this is Colin again. My first job in tech was as a product manager when I had next to no experience. Somebody took a chance on me when I felt like a total imposter. I wouldn’t have gotten that opportunity if I never applied. I hope you apply too.
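The bonus items mention CAC-to-LTV reports. A toy sketch of those two metrics under simplifying assumptions (constant monthly churn, margin-adjusted revenue; every number below is invented for illustration):

```python
def cac(total_acquisition_spend: float, new_customers: int) -> float:
    """Customer acquisition cost: spend per customer acquired."""
    return total_acquisition_spend / new_customers

def ltv(avg_monthly_revenue: float, gross_margin: float,
        monthly_churn: float) -> float:
    """Simple lifetime value: margin-adjusted monthly revenue times
    the expected customer lifetime (1 / churn months, assuming a
    constant churn rate)."""
    return avg_monthly_revenue * gross_margin / monthly_churn

# Example: $50k spend yields 250 new customers paying $100/month
# at 80% gross margin, with 5% monthly churn.
acquisition_cost = cac(50_000, 250)    # 200.0
lifetime_value = ltv(100, 0.80, 0.05)  # 1600.0
print(lifetime_value / acquisition_cost)  # the LTV:CAC ratio
```

A ratio well above 1 suggests acquisition spend pays for itself; the constant-churn lifetime assumption is the part real reports refine with cohort data.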


See more jobs at Customer.io

# How do you apply?

This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.
Doximity

This job post is closed and the position is probably filled. Please do not apply.
Doximity is transforming the healthcare industry. Our mission is to help doctors save time so they can provide better care for patients.

We value diversity — in backgrounds and in experiences. Healthcare is a universal concern, and we need people from all backgrounds to help build the future of healthcare. Our data science team is deliberate and self-reflective about the kind of data team and culture that we are building, seeking data scientists and engineers who are not only strong in their own aptitudes but care deeply about supporting each other's growth. We have one of the richest healthcare datasets in the world, and our team brings a diverse set of technical and cultural backgrounds.

Doximity's social network has over 70% of US doctors as members. We recently launched a newsfeed that helps physicians navigate the latest practice-changing medical literature, parsing millions of scientific studies while personalizing content to their patient base and every stage of their medical career. We're looking for a deep learning expert to help our data science and engineering teams build models in support of the newsfeed, combining metadata from our clinician social graph with text-based features.

How you'll make an impact:

- Apply NLP methods and build deep neural networks to personalize medical content to our clinician members.
- Optimize the layout of different content types (e.g., journal articles, news articles, video, social updates) within the newsfeed.
- Collaborate with data engineers to devise plans to refresh the newsfeed and surface optimized content to users in near real time.
- Provide technical leadership to other data scientists and data engineers working on the newsfeed product.

What we're looking for:

- 4+ years of industry experience and an M.S./Ph.D. in Computer Science, Engineering, Statistics, or another relevant technical field.
- 4+ years of experience with various machine learning methods (classification, clustering, natural language processing, ensemble methods) and the parameters that affect their performance.
- Experience building deep neural networks in support of recommendation systems.
- Solid engineering skills to build scalable solutions and help automate data processing challenges.
- Desire to provide technical leadership and mentorship to other data team members.
- Expert knowledge of probability and statistics (e.g., experimental design, optimization, predictive modeling).
- Excellent problem-solving skills and the ability to connect data science work to product impacts.
- Fluent in SQL and Python; experience using Apache Spark (PySpark).
- Experience working with relational and non-relational databases.

Nice to have:

- Experience applying reinforcement learning to industry problems.

About Doximity

We're thrilled to be named the Fastest Growing Company in the Bay Area, and one of Fast Company's Most Innovative Companies. Joining Doximity means being part of an incredibly talented and humble team. We work on amazing products that over 70% of US doctors (and over one million healthcare professionals) use to make their busy lives a little easier. We're driven by the goal of improving inefficiencies in our $2.5 trillion U.S. healthcare system and love creating technology that has a real, meaningful impact on people's lives. To learn more about our team, culture, and users, check out our careers page, company blog, and engineering blog. We're growing fast, and there's plenty of opportunity for you to make an impact—join us!

Doximity is proud to be an equal opportunity employer, and committed to providing employment opportunities regardless of race, religious creed, color, national origin, ancestry, physical disability, mental disability, medical condition, genetic information, marital status, sex, gender, gender identity, gender expression, pregnancy, childbirth and breastfeeding, age, sexual orientation, military or veteran status, or any other protected classification. We also consider qualified applicants with criminal histories, consistent with applicable federal, state and local law.
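The newsfeed role above is about matching medical content to a clinician's interests. As a minimal, hypothetical sketch of the text side of that problem (the production system would use learned deep models, and all names and data here are illustrative), articles can be ranked against an interest profile with bag-of-words cosine similarity:

```python
import math
from collections import Counter

def vectorize(text: str) -> Counter:
    """Bag-of-words term counts for a lowercase, whitespace-tokenized text."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a.keys() & b.keys())
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def rank_articles(profile: str, articles: dict) -> list:
    """Order article ids by similarity to a clinician's interest profile."""
    pvec = vectorize(profile)
    return sorted(articles, key=lambda aid: cosine(pvec, vectorize(articles[aid])), reverse=True)

# Toy data: two articles and a cardiology-flavored interest profile.
articles = {
    "a1": "randomized trial of statin therapy in cardiology patients",
    "a2": "pediatric asthma inhaler adherence study",
}
ranked = rank_articles("cardiology statin lipid therapy", articles)
```

A deep network over text embeddings would replace the `cosine` scorer, but the ranking interface stays the same.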


See more jobs at Doximity

Visit Doximity's website

# How do you apply?

This job post has been closed by the poster, which means they probably have enough applicants now. Please do not apply.

Doximity


This position is a Remote OK original posting verified closed

Senior Data Scientist Graph Analytics


Doximity

Originally posted on Remote OK

machine learning

 

senior

This job post is closed and the position is probably filled. Please do not apply.
Doximity is transforming the healthcare industry. Our mission is to help doctors save time so they can provide better care for patients.

We value diversity — in backgrounds and in experiences. Healthcare is a universal concern, and we need people from all backgrounds to help build the future of healthcare. Our data science team is deliberate and self-reflective about the kind of data team and culture that we are building, seeking data scientists and engineers who are not only strong in their own aptitudes but care deeply about supporting each other's growth. We have one of the richest healthcare datasets in the world, and our team brings a diverse set of technical and cultural backgrounds.

Doximity's social network has over 70% of US doctors as members and 40M connections among them. We are currently looking for a data scientist to lead a team of other data scientists, data engineers, and data analysts with the goal of utilizing this medical graph to better connect and promote discussion among our clinician members.

How you'll make an impact:

- Build link prediction models to recommend colleagues for our members to connect and engage with (the "people you may know" problem) to increase user engagement across Doximity's platforms.
- Leverage data from Doximity's social graph to improve the user personalization models that recommend medical and social content to our clinician members.
- Discover trends and new business opportunities from physicians' medical claims and referral networks.
- Provide technical leadership to other data scientists and data engineers working on our growth and colleaguing product teams.

What we're looking for:

- 4+ years of industry experience and an M.S./Ph.D. in Computer Science, Engineering, Statistics, or another relevant technical field.
- 4+ years of experience with various machine learning methods (classification, clustering, natural language processing, ensemble methods) and the parameters that affect their performance.
- Experience applying graph theory, analytics, and processing to industry problems.
- Solid engineering skills to build scalable solutions and help automate data processing challenges.
- Desire to provide technical leadership and mentorship to other data team members.
- Excellent problem-solving skills and the ability to connect data science work to product impacts.
- Fluent in SQL and Python; experience using Apache Spark (PySpark).
- Experience working with relational and non-relational databases.

Nice to have:

- Experience with Neo4j and either Spark GraphX or GraphFrames for performing graph analytics.

About Doximity

We're thrilled to be named the Fastest Growing Company in the Bay Area, and one of Fast Company's Most Innovative Companies. Joining Doximity means being part of an incredibly talented and humble team. We work on amazing products that over 70% of US doctors (and over one million healthcare professionals) use to make their busy lives a little easier. We're driven by the goal of improving inefficiencies in our $2.5 trillion U.S. healthcare system and love creating technology that has a real, meaningful impact on people's lives. To learn more about our team, culture, and users, check out our careers page, company blog, and engineering blog. We're growing fast, and there's plenty of opportunity for you to make an impact—join us!

Doximity is proud to be an equal opportunity employer, and committed to providing employment opportunities regardless of race, religious creed, color, national origin, ancestry, physical disability, mental disability, medical condition, genetic information, marital status, sex, gender, gender identity, gender expression, pregnancy, childbirth and breastfeeding, age, sexual orientation, military or veteran status, or any other protected classification. We also consider qualified applicants with criminal histories, consistent with applicable federal, state and local law.
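The "people you may know" problem mentioned above is classically approached with neighborhood-based link prediction: score non-connected pairs by how many neighbors they share. A minimal sketch over an adjacency-set graph (the toy graph is illustrative, not real member data):

```python
from itertools import combinations

def common_neighbor_scores(adj: dict) -> dict:
    """Score each non-adjacent pair of nodes by its number of shared neighbors."""
    scores = {}
    for u, v in combinations(sorted(adj), 2):
        if v not in adj[u]:  # only score pairs not already connected
            scores[(u, v)] = len(adj[u] & adj[v])
    return scores

def suggest(adj: dict, user: str, k: int = 3) -> list:
    """Top-k 'people you may know' suggestions for one user."""
    scores = common_neighbor_scores(adj)
    cands = [(s, a if b == user else b) for (a, b), s in scores.items() if user in (a, b)]
    return [name for s, name in sorted(cands, reverse=True)[:k] if s > 0]

# Toy graph: ann and dan are not connected but share two neighbors.
adj = {
    "ann": {"bob", "cara"},
    "bob": {"ann", "cara", "dan"},
    "cara": {"ann", "bob", "dan"},
    "dan": {"bob", "cara"},
}
```

At Doximity's scale this scoring would run distributed (e.g., in Spark GraphX/GraphFrames), but the common-neighbors heuristic is a standard baseline for link prediction models.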


See more jobs at Doximity

Visit Doximity's website

# How do you apply?

This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.

Doximity


This position is a Remote OK original posting verified closed

Data Analyst


Doximity

Originally posted on Remote OK

sql

 

unix

 

statistics

This job post is closed and the position is probably filled. Please do not apply.
Doximity is transforming the healthcare industry. Our mission is to help doctors save time so they can provide better care for patients.

We value diversity — in backgrounds and in experiences. Healthcare is a universal concern, and we need people from all backgrounds to help build the future of healthcare. Our data science team is deliberate and self-reflective about the kind of data team and culture that we are building, seeking data analysts who are not only strong in their own aptitudes but care deeply about supporting each other's growth. We have one of the richest healthcare datasets in the world, and our team brings a diverse set of technical and cultural backgrounds.

How you'll make an impact:

- Collaborate with a team of product managers, analysts, data scientists, and other developers to define and complete data projects.
- Show off your coding skills by creating data products from scratch and streamlining/automating existing code.
- Leverage Doximity's extensive data sets to identify and classify behavioral patterns of medical professionals on our platform.
- Play a key role in creating both product- and client-facing analytics.
- Grow into a presentation/communication-focused role or dive deeper into more involved technical challenges - the choice is yours.
- Learn from experienced mentors and build your technical and non-technical skill sets.

What we're looking for:

- B.S. or M.S. in a quantitative field with 2-4 years of experience.
- Solid knowledge of statistics and visualization.
- Fluent in SQL and Python; these skills are an absolute must.
- Comfortable with the UNIX command line interface and standard programming tools (vim/emacs, git, etc.).
- Excellent problem-solving skills and strong attention to detail.
- Ability to manage time well and prioritize incoming tasks from different stakeholders.
- Fast learner; curiosity about and passion for data.
- Prior exposure to machine learning techniques (regressors, classifiers, etc.) is a plus.
- Experience leveraging Apache Spark to perform analyses or process data is a plus.

About Doximity

We're thrilled to be named the Fastest Growing Company in the Bay Area, and one of Fast Company's Most Innovative Companies. Joining Doximity means being part of an incredibly talented and humble team. We work on amazing products that over 70% of US doctors (and over one million healthcare professionals) use to make their busy lives a little easier. We're driven by the goal of improving inefficiencies in our $2.5 trillion U.S. healthcare system and love creating technology that has a real, meaningful impact on people's lives. To learn more about our team, culture, and users, check out our careers page, company blog, and engineering blog. We're growing fast, and there's plenty of opportunity for you to make an impact—join us!

Doximity is proud to be an equal opportunity employer, and committed to providing employment opportunities regardless of race, religious creed, color, national origin, ancestry, physical disability, mental disability, medical condition, genetic information, marital status, sex, gender, gender identity, gender expression, pregnancy, childbirth and breastfeeding, age, sexual orientation, military or veteran status, or any other protected classification. We also consider qualified applicants with criminal histories, consistent with applicable federal, state and local law.
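The SQL-plus-Python fluency this analyst role asks for often looks like embedding analytics SQL in a small Python script. A hypothetical sketch of classifying behavioral patterns, using the stdlib sqlite3 module (the `logins` table and the "engaged"/"casual" segments are made up for illustration):

```python
import sqlite3

# In-memory toy database standing in for a real warehouse table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE logins (user_id TEXT, day TEXT);
    INSERT INTO logins VALUES
        ('u1', '2024-01-01'), ('u1', '2024-01-02'), ('u1', '2024-01-03'),
        ('u2', '2024-01-01');
""")

# Segment users by how many distinct days they were active.
rows = conn.execute("""
    SELECT user_id,
           COUNT(DISTINCT day) AS active_days,
           CASE WHEN COUNT(DISTINCT day) >= 3 THEN 'engaged' ELSE 'casual' END AS segment
    FROM logins
    GROUP BY user_id
    ORDER BY user_id
""").fetchall()
```

The same GROUP BY / CASE pattern transfers directly to a production warehouse; only the connection object changes.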


See more jobs at Doximity

Visit Doximity's website

# How do you apply?

This job post has been closed by the poster, which means they probably have enough applicants now. Please do not apply.

Rainforest


This position is a Remote OK original posting closed

Data Science Generalist


Rainforest

Originally posted on Remote OK

machine learning

 

This job post is closed and the position is probably filled. Please do not apply.
We're looking for a Data Scientist/Machine Learning Engineer with a wide range of competencies. Rainforest is changing the way people do QA, and data is at the heart of how we do that. From crowd management and fraud detection to data visualization and automation research, there are myriad opportunities to use your creativity and technical skills.

We are looking for someone who learns quickly and is a great communicator. Since we are a remote, distributed development team, solid writing skills and (over)communication are important to us. You can be based anywhere in the world (including San Francisco, where our HQ is located).

We regularly send our data scientists to conferences, both [to speak](https://www.youtube.com/watch?v=7_h8PElXio8) and to learn (e.g., last year we went to NIPS and KDD, this year to EuroPython and a couple of [PyDatas](https://www.youtube.com/watch?v=8kL71zk4KNk)). You can read about some of our work on [predicting test run durations](https://www.rainforestqa.com/blog/2017-11-07-how-to-predict-and-reduce-test-execution-time/) and how [Kaggle](https://www.rainforestqa.com/blog/2017-10-24-kaggle-perks-in-real-data-science-work/) can be useful in the real world. And hear some of our team members speak on the topic of Women, Company Culture, and Remote Teams [here](https://www.youtube.com/watch?v=BwKHfPgLcBY).


See more jobs at Rainforest

Visit Rainforest's website

# How do you apply?

This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.

Doximity


This position is a Remote OK original posting verified closed

Senior Data Scientist


Doximity

Originally posted on Remote OK

machine learning

 

machine learning

 

senior

This job post is closed and the position is probably filled. Please do not apply.
Doximity is the leading online medical network with over 70% of U.S. doctors as members. We have strong revenues, profits, real market traction, and we're putting a dent in the inefficiencies of our $2.5 trillion U.S. healthcare system. After the iPhone, Doximity is the fastest-adopted product by doctors of all time. Doximity was launched by Jeff Tangney in 2011; Jeff previously founded healthcare pioneer Epocrates (NASDAQ: EPOC). Our beautiful offices are located in SoMa, San Francisco.

## Skills & Requirements

- 4+ years of industry experience and an M.S./Ph.D. in Computer Science, Engineering, Statistics, or another relevant technical field.
- 4+ years of experience with various machine learning methods (classification, clustering, natural language processing, ensemble methods, deep learning) and the parameters that affect their performance.
- Expert knowledge of probability and statistics (e.g., experimental design, optimization, predictive modeling).
- Experience with recommendation algorithms and strong knowledge of machine learning concepts.
- Solid engineering skills to build scalable solutions and help automate data processing challenges.
- Excellent problem-solving skills and the ability to connect data science work to product impacts.
- Fluent in SQL and Python; experience using Apache Spark (PySpark) and working with both relational and non-relational databases.
- Familiarity with AWS, Redshift.

## What you can expect

- Employ scalable statistical methods and NLP methods to develop machine learning models at scale, owning them from inception to business impact.
- Leverage knowledge of recommendation algorithms to increase user engagement through personalization of delivered content.
- Plan, engineer, and measure outcomes of online experiments to help guide product development.
- Collaborate with a team of product managers, analysts, data engineers, data scientists, and other developers.
- Think creatively and outside of the box. The ability to implement and test your ideas quickly is crucial.

## Technical Stack

- We historically favor Python and MySQL, but leverage other tools when appropriate for the job at hand.
- Machine learning (linear/logistic regression, ensemble models, boosted models, clustering, NLP, text categorization, user modeling, collaborative filtering, etc.) via industry-standard packages (sklearn, NLTK, Spark ML/MLlib, GraphX/GraphFrames, NetworkX, gensim).
- A dedicated cluster is maintained to run Apache Spark for computationally intensive tasks.
- Storage solutions: Percona, Redshift, S3, HDFS, Hive, Neo4j.
- Computational resources: EC2, Spark.
- Workflow management: Airflow.

## Fun facts about the Data Science team

- We have access to one of the richest healthcare datasets in the world, with deep information on hundreds of thousands of healthcare professionals and their connections.
- We build code that addresses user needs, solves business problems, and streamlines internal processes.
- The members of our team bring a diverse set of technical and cultural backgrounds.
- Business decisions at Doximity are driven by our data, analyses, and insights.
- Hundreds of thousands of healthcare professionals will utilize the products you build.
- A couple of times a year we run a co-op where you can pick a few people you'd like to work with and drive a specific company goal.
- We like to have fun - company outings, team lunches, and happy hours!
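The recommendation work in this posting leans on collaborative filtering. A minimal item-based sketch over implicit feedback (which users engaged with which items), using Jaccard overlap as the item similarity; the toy data is illustrative and this is not the production sklearn/Spark pipeline:

```python
def jaccard(a: set, b: set) -> float:
    """Jaccard similarity of two sets; 0.0 when both are empty."""
    return len(a & b) / len(a | b) if a | b else 0.0

def recommend(engagements: dict, user: str, k: int = 2) -> list:
    """Recommend unseen items most similar to the ones `user` engaged with."""
    # Invert to item -> set of users who engaged with it.
    item_users = {}
    for u, items in engagements.items():
        for it in items:
            item_users.setdefault(it, set()).add(u)
    seen = engagements[user]
    # Score each unseen item by its best similarity to a seen item.
    scores = {
        cand: max(jaccard(item_users[cand], item_users[s]) for s in seen)
        for cand in item_users if cand not in seen
    }
    return sorted(scores, key=scores.get, reverse=True)[:k]

# Toy data: u1 and u2 overlap heavily, so u2's extra article ranks first for u1.
engagements = {
    "u1": {"article_a", "article_b"},
    "u2": {"article_a", "article_b", "article_c"},
    "u3": {"article_d"},
}
```

Swapping Jaccard for cosine over rating vectors, and the dict for a sparse matrix, gives the usual memory-based collaborative filtering formulation.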


See more jobs at Doximity

Visit Doximity's website

# How do you apply?

This job post has been closed by the poster, which means they probably have enough applicants now. Please do not apply.

Doximity


This position is a Remote OK original posting verified closed

Machine Learning Engineer


Doximity

Originally posted on Remote OK

git

 

machine learning

 

This job post is closed and the position is probably filled. Please do not apply.
Why work at Doximity?

Doximity is the leading social network for healthcare professionals, with over 70% of U.S. doctors as members. We have strong revenues, real market traction, and we're putting a dent in the inefficiencies of our $2.5 trillion U.S. healthcare system. After the iPhone, Doximity is the fastest-adopted product by doctors of all time. Our founder, Jeff Tangney, is the founder and former President and COO of Epocrates (IPO in 2010), and Nate Gross is the founder of the digital health accelerator Rock Health. Our investors include top venture capital firms who've invested in Box, Salesforce, Skype, SpaceX, Tesla Motors, Twitter, Tumblr, Mulesoft, and Yammer. Our beautiful offices are located in SoMa, San Francisco.

Skills & Requirements

- 3+ years of industry experience; M.S. in Computer Science or other relevant technical field preferred.
- 3+ years of experience collaborating with data science and data engineering teams to build and productionize machine learning pipelines.
- Fluent in SQL and Python; experience using Spark (PySpark) and working with both relational and non-relational databases.
- Demonstrated industry success in building and deploying machine learning pipelines, as well as feature engineering from semi-structured data.
- Solid understanding of the foundational concepts of machine learning and artificial intelligence.
- A desire to grow as an engineer through collaboration with a diverse team, code reviews, and learning new languages/technologies.
- 2+ years of experience using version control, especially Git.
- Familiarity with Linux, AWS, Redshift.
- Deep learning experience preferred.
- Work experience with REST APIs, deploying microservices, and Docker is a plus.

What you can expect

- Employ appropriate methods to develop performant machine learning models at scale, owning them from inception to business impact.
- Plan, engineer, and deploy both batch-processed and real-time data science solutions to increase user engagement with Doximity's products.
- Collaborate cross-functionally with data engineers and software engineers to architect and implement infrastructure in support of Doximity's data science platform.
- Improve the accuracy, runtime, scalability, and reliability of machine intelligence systems.
- Think creatively and outside of the box. The ability to formulate, implement, and test your ideas quickly is crucial.

Technical Stack

- We historically favor Python and MySQL (SQLAlchemy), but leverage other tools when appropriate for the job at hand.
- Machine learning (linear/logistic regression, ensemble models, boosted models, deep learning models, clustering, NLP, text categorization, user modeling, collaborative filtering, topic modeling, etc.) via industry-standard packages (sklearn, Keras, NLTK, Spark ML/MLlib, GraphX/GraphFrames, NetworkX, gensim).
- A dedicated cluster is maintained to run Apache Spark for computationally intensive tasks.
- Storage solutions: Percona, Redshift, S3, HDFS, Hive, Neo4j, and Elasticsearch.
- Computational resources: EC2, Spark.
- Workflow management: Airflow.

Fun facts about the Data Science team

- We have one of the richest healthcare datasets in the world.
- We build code that addresses user needs, solves business problems, and streamlines internal processes.
- The members of our team bring a diverse set of technical and cultural backgrounds.
- Business decisions at Doximity are driven by our data, analyses, and insights.
- Hundreds of thousands of healthcare professionals will utilize the products you build.
- A couple of times a year we run a co-op where you can pick a few people you'd like to work with and drive a specific company goal.
- We like to have fun - company outings, team lunches, and happy hours!
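Workflow management with Airflow, listed in the stack above, amounts to running pipeline tasks in dependency (DAG) order. The core idea can be sketched with the stdlib graphlib module; the task names mirror a hypothetical ETL-then-train pipeline and are purely illustrative:

```python
from graphlib import TopologicalSorter

# Each task maps to the set of tasks it depends on,
# the same shape as task dependencies in an Airflow DAG.
dag = {
    "extract": set(),
    "transform": {"extract"},
    "train_model": {"transform"},
    "publish_scores": {"train_model"},
}

# A valid execution order that respects every dependency.
order = list(TopologicalSorter(dag).static_order())
```

Airflow adds scheduling, retries, and distributed execution on top, but the dependency-ordering core is exactly a topological sort like this one.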


See more jobs at Doximity

Visit Doximity's website

# How do you apply?

This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.

Virtual Pricing Director


closed

Back End Engineer Node.js Python Go Rust Workflow Management Data Science


Virtual Pricing Director


backend

 

javascript

 

node js

This job post is closed and the position is probably filled. Please do not apply.
Virtual Pricing Director is hiring a Back End Engineer - working in Node.js and other languages - to architect and implement its new Legal Tech back end and data platform.

Our software empowers a wide audience within law firms to swiftly produce consistently high-quality, data-driven pricing proposals that profitably deliver on clients' expectations of pricing transparency and pricing certainty. We achieve this with a best-in-class data management and workflow platform, and a compelling focus on UI/UX.

We have a proven MVP, the best-recognised brand, and strong demand. Now embarking on our third product release, you will design and build out an entirely new data structure and back end. You will be establishing data pipelines, representing data for BI and reporting, and building comprehensive back end capabilities. Your work will go on to manifest new workflow management capabilities, and to bring structure to unstructured data.

The domain entails data-intensive services where security, data integrity and uptime are key. This presents lots of interesting challenges as we build and integrate our technology. We offer considerable freedom in technology choice and approach. There are big plans and lots to accomplish.

Upcoming projects

* Exploring, innovating and creating IP
* Designing and implementing new data structures and logic
* New back end for modular BI and reporting products
* New API and data integration pipelines
* Designing operational AWS infrastructure
* Automating quality, CI/CD, and shaping a DevOps culture
* Supporting dialogue with customers
* Exploring GraphQL
* Upholding ISO 27001
* Preparing for explorations in machine learning

We're looking for

* Someone ready to shape the back end and data solution
* Strong grounding in Computer Science, Data Science or Mathematics, through formal study - or equivalent knowledge
* Deep technical software design and coding skills, accrued in a modern web back end context
* Technology agnostic and adept with Node.js and strong SQL
* Ability to build modern microservice-based systems that scale
* Ability to unpack complex requirements, uphold the security of sensitive data, and conform to best practices
* A collaborative, adaptable, user-centered approach
* You may also bring - or like to gain - interests around knowledge management, data science, NLP or machine learning
* Someone considering remote, Senior or Lead level back end jobs such as: Back End Engineer | Back End Developer | Microservices Developer | Lead Software Engineer | Node.js Developer | Node.js Engineer etc.

Anticipated ecosystem - we'll welcome your influence

Node.js | optionally some Python, Go, Rust or similar | PostgreSQL | GraphQL | AWS | ML | TDD | Agile

Salary and benefits

* £60,000 - £90,000+ - we're keeping an open mind
* Share scheme for founding team members (once proven value to the business)
* 25 days holiday, plus public holidays and a day for your birthday
* Family-friendly and flexible culture - tell us what you need
* Personal development plan that you can shape, with budget for related training/certifications


See more jobs at Virtual Pricing Director

# How do you apply?

This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.

Clevertech


closed

Python Data Science


Clevertech


This job post is closed and the position is probably filled. Please do not apply.
A Python/Data Science developer is required to join a team project that uses social network data at scale.

Please note that this is a senior-level role; we are currently only considering candidates who meet the requirements below.

Qualifications:

* 5+ years of experience in a senior developer or architect role; ideally, you have delivered business-critical software to large enterprises
* A statistical background with experience working with large databases
* Experience working with Python packages such as PySpark and Pandas
* Experience with AWS data-oriented products is required: AWS Glue, Athena, DynamoDB, S3, Lambda functions
* Experience with Databricks as part of an AWS pipeline with Lambdas
* Advanced knowledge of interfacing with the Facebook, Instagram, Twitter, and YouTube social network APIs
* Experience with NodeJS- and ReactJS-based web applications
* Experience with Jenkins and Groovy-based imperative pipelines to orchestrate multiple processes during deployments
* Experience and knowledge of advanced GraphQL topics such as schema stitching and federation

What you'll do:

* Gather influencer statistics and social engagement statistics for sentiment analysis
* Collaborate in every stage of a product's lifecycle, from planning to delivery
* Create clean, modern, testable, well-documented code
* Communicate daily with clients to understand and deliver technical requirements

How We Work

Why do people join Clevertech? To make an impact. To grow themselves. To be surrounded by developers they can learn from. We are truly excited to be creating waves in an industry under transformation.

True innovation comes from an exchange of knowledge across all of our teams. To put people on the path for success, we nurture a culture built on trust, collaboration, and personal growth. You will work in small, feature-based, cross-functional teams and be empowered to take ownership.

We make a point of constantly evolving our experience and skills. We value diverse perspectives and foster personal growth by challenging everyone to push beyond their comfort level and try something new.

The result? We produce meaningful work.

Want to learn more about Clevertech and the team? Check out clevertech.careers and our recent video highlighting an actual Clevertech Sr. Developer's story.


See more jobs at Clevertech

# How do you apply?

This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.

Simply Business


closed

Senior Ruby Python Java Developers


Simply Business


qa

 

javascript

 

infosec

 

cloud

This job post is closed and the position is probably filled. Please do not apply.
If you're smart, passionate about technology, and enjoy solving complex technical challenges then you should apply to join our best-in-class tech team. We believe that people are our most important asset and one worth protecting. As such, we're known for creating an enviable working culture to keep our employees smiling on a day-to-day basis.

We offer things such as flexible hours, remote working, fortnightly hackathons, and the freedom to work on projects of individual interest, to name just a few. In fact, our impressive working culture has recently earned us 1st place in the Sunday Times 100 Best Companies to Work for 2015.

We have two different positions.

1. Senior Developers who would like to lead a team / projects - We envisage the split here being about 70% hands-on coding and 30% leading the team, liaising with product owners on prioritization work, attending iteration planning meetings with stakeholders, delegating tasks to the dev team, and making sure we are following Lean/Agile processes correctly.

2. Senior Developers who want to be 100% coders (non-lead) - This role will be 100% hands-on, working with our other Devs to code beautiful products for our customers!

Some of our Current Projects:

Seedy - Our custom-built CMS. A hybrid, markdown-based, Sinatra-driven CMS, but with a TDD undercurrent. Content changes are no different from backend refactoring. Templates are written in a custom DSL that enforces our style guide while still supporting iterative changes and automatic deployment. Across the company, people (including business teams) are now being tagged in pull requests and even writing code!

Aerie - We're overhauling our data and analytics architecture. Using the latest approaches in event streaming, we are building a brand new data pipeline based on Kinesis and Redshift that will support the analysis of both structured and semi-structured data in near real-time. This will provide rapid and intuitive customer insight to our decision makers, while also providing exciting machine learning capabilities back to our core product offering.

AAA - We are enhancing our new world platform. Our current focus is on security, with a view to building out 'infrastructure as code' using tools such as AWS CloudFormation, AWS OpsWorks, Chef/Puppet, and many others. Essentially, we believe a well-architected system allows us to avoid unnecessary complexity and can enable individuals to generate great output.

Our Tech Stack

* Ruby
* Rails
* Scala
* MongoDB
* RabbitMQ
* Puppet/Boxen
* AWS
* Hadoop / RedShift / Looker / Tableau
* Cucumber / RSpec / Jasmine
* SASS / HAML / HTML5 / CSS3 / JavaScript / CoffeeScript / jQuery

This list changes all the time, as being on the lookout for new technology trials and opportunities is part of our code. We don't believe in limiting ourselves, or the company. If there are other tools that could get the job done better, we're committed to exploring them.

Things we believe in:

* TDD / BDD
* Continuous Delivery
* Pair Programming
* Build - Measure - Learn
* Active participation in open source
* Our business people writing code (we've not seen this anywhere else!)
* Cross-functional teams (Dev / QA / Data / UX / Product owners all sitting together)
* Release early and learn quickly - we release our software to production around 15 times a day on average!
* Continuous learning - we look to pay for courses and conferences and actively encourage our Devs to get out and about in the community. We have even sent some of our Devs to Miami, Portugal and Barcelona for conferences recently.
* Have fun! We do Ruby coding challenges during lunchtimes over pizza, we have desk beers on a Friday, we have several social events throughout the year, we do fortnightly hackathons, we have after-work clubs like the UX, Data Science and Robotics clubs, we have flexible working hours, weekly 'show n tell' sessions, and lots more!


See more jobs at Simply Business

# How do you apply?\n\n This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.

EcoHealth Alliance


verified closed

Software Developer


EcoHealth Alliance


javascript

 

meteor js

 

mobile

This job post is closed and the position is probably filled. Please do not apply.
\nAre you interested in applying your technical skills to identifying the next global infectious disease threat and improving global health? \n\nEcoHealth Alliance is seeking software developers for our tech team. As part of a small team of developers, data scientists, and infectious disease scientists, you’ll develop tools to support EcoHealth Alliance’s mission in conservation and global public health. The focus of this position will be to develop global health web platforms that are actively used to monitor global emerging infectious diseases, and that contribute to the analysis of disease emergence.\n\nDescription and Responsibilities \n\nWe are looking to build a diverse team with a variety of skill sets and backgrounds. Position duties may include:\n\n\n* Contributing to open source software\n\n* Mobile app development\n\n* Building front-ends for web frameworks (e.g., Meteor)\n\n* Data and text mining\n\n* Designing scientific visualizations\n\n* Geo-tagging and geo-visualization\n\n* Network modeling\n\n* Contributing to technical project management\n\n* Some collaborative proposal development\n\n* Developing natural language processing and machine learning systems\n\n\n\n\nWe mostly code in CoffeeScript, JavaScript, and Python, and use Meteor for web apps and AWS for hosting. We collaborate on GitHub, do code review via pull requests, and use project management and communication tools to keep everything on track.\n\nEcoHealth Alliance is an equal opportunity employer offering a competitive salary and a comprehensive benefits package that covers 100% of the monthly health care premium costs for the employee and their family when applicable (including dental and vision coverage), a 403(b) pension plan, flexible work schedules, a minimum of 10 days paid vacation annually, paid holidays, work-from-home days, and a pre-tax transportation withholding for commuters in New York City. 
EcoHealth Alliance may encourage employees to attend conferences when appropriate, and enroll in courses for the joint purposes of professional development and promoting the EcoHealth Alliance mission.\n\nThe position is based at EHA headquarters in New York City (Manhattan). Local candidates are preferred, with possible relocation assistance available. Outstanding remote candidates will also be considered, but must be available to work for at least 4 hours between 9am and 6pm Eastern Standard Time to facilitate collaboration with New York based employees. \n\nHow To Apply\n\nTo apply, you must send your resume and a personalized cover letter as one document to [email protected] with the subject of 'Software Developer 2015'. Applications without cover letters will not be evaluated. This cover letter will help us assess your ability to communicate effectively and will be used as the primary mechanism to determine whether a candidate is asked to interview for the vacant position.  The cover letter is the most important part of your application. In your cover letter, please briefly describe your background and any experience you may have with web development frameworks, machine learning, natural language processing and public health. Be sure to mention why you are interested in working  on a research-and-development oriented team at a health and conservation focused non-profit.


See more jobs at EcoHealth Alliance

# How do you apply?\n\n This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.

IP-Echelon


closed

Data Analyst Developer


IP-Echelon


perl

 

ops

This job post is closed and the position is probably filled. Please do not apply.
\nIP-Echelon helps major rights holders solve complex problems related to unauthorized content distribution. Our people are the key to our success - clever, personable, fun, and hard-working. We are seeking a junior data analyst / developer for our Los Angeles office to help in our day-to-day operations. The role is perfect for a recent graduate in computing (especially computer science) who is eager to get hands-on with some awesome technologies and learn valuable new skills.\n\nFor the right candidate, there would be some flexibility in hours and the ability to telecommute.\n\nThe position responsibilities would include:\n\n\n* Data analysis and research; presentation of findings to both clients and upper management\n\n* Querying data using SQL, Tableau, advanced Excel features, as well as in-house tools\n\n* Manipulating and modeling data using a language such as Python, Perl or R\n\n* Some programming in Python. If you have good skills in another language you should be able to learn on the job\n\n* Support of internal staff and clients across a variety of projects\n\n* Proofreading and document formatting\n\n* Managing relationships with some data vendors / contractors\n\n* Inspection and classification of movie/television content\n\n* Miscellaneous tasks such as arranging schedules, meetings, and travel for key staff


See more jobs at IP-Echelon

# How do you apply?\n\n This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.

Knot 6-3


closed
San Francisco

Data Scientist


Knot 6-3

San Francisco

excel

 

sales

This job post is closed and the position is probably filled. Please do not apply.
Where others see an 8 GB SQL dump, you see a thousand stories waiting to be uncovered. You're characterized by an overdose of curiosity with a healthy serving of skepticism. You can run an analytics campaign to optimize a sales funnel or build a web-scraping, natural-language-processing recommendation engine.\n\nYou know the tools of the trade and know which one fits the problem — Hadoop and Excel can co-exist peacefully. You're used to being the smartest kid in the room. You don't care if anybody else knows that. Your work speaks for itself. You're our next data scientist.\n \n\n#Salary and compensation\n$90,000 — $120,000/year\n \n\n#Equity\n0.5 - 2.0\n\n\n#Location\nSan Francisco


See more jobs at Knot 6-3

Visit Knot 6-3's website

# How do you apply?\n\n This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.

Khan Academy


closed

Data Scientist


Khan Academy


javascript

 

edu

 

teaching
This job post is closed and the position is probably filled. Please do not apply.
\nKhan Academy is looking for talented data scientists to create a free virtual classroom for the world. Our small team is changing the face of education - join us on our mission to provide a free, world-class education for anyone, anywhere.\n\nWe’re fun, quirky people at heart that come from a variety of backgrounds. Our team includes industry leaders from Google, Apple, Facebook, Microsoft, Pixar, Fog Creek, and tiny startups, as well as people with non-traditional backgrounds who didn’t graduate from college. Together, we’re a small but elite team, deeply invested in your future. We believe that no organization will be as invested in developing you as a professional.\n\nThe Data Science team works alongside developers, product managers and designers, applying analytical tools and statistical methods to our amazing dataset to improve the learning experience for our users. Data scientists are responsible for querying, cleaning and presenting data as well as running experiments and answering open questions with detailed analysis or investigation of the data. We enjoy the challenge of tackling tricky questions and effectively communicating (sometimes ambiguous) results with the team. Some example inquiries:\n\n\n* How long does it take a typical user to complete a practice task on the learning dashboard? What factors correlate with higher rates of task completion?\n\n* Which videos/exercises on the site are correlated with increased engagement/learning and why?\n\n* Are there specific interventions (e.g. 'Growth Mindset' messages or moral support) that can increase achievement on the site?\n\n* Can we infer a user's intended goal on the site within their first few clicks, and subsequently measure how well we met that goal?\n\n\n\n\nWE USE\n\n\n* Python, R, Amazon AWS, App Engine/BigQuery, JavaScript, and anything else that best solves the problem at hand\n\n* We're not religious about using a specific technology. 
We're religious about providing an incredible experience for Khan Academy learners\n\n* Cutting-edge research from psychology, education, and cognitive science to guide our experiments and form our hypotheses\n\n\n\n\nYOU NEED\n\n\n* To be completely comfortable with both the statistical analysis of large data sets and any scripting languages or statistical packages that this may require\n\n* Basic knowledge of algorithms in any programming language. Python is preferred but not required\n\n* Passion for whatever techniques and technology you've used in the past\n\n* Solid command of basic statistics and a strong quantitative aptitude. Machine learning and advanced stats not required, but it sure doesn't hurt\n\n* Pragmatism and demonstrated skill at turning data into action\n\n* A passion for data and desire to change the world\n\n* US, Canadian, Australian, or Mexican citizenship, US Residency, or other authorization to work in the US\n\n\n\n\nWE OFFER THE FOLLOWING BENEFITS\n\nWe may be a non-profit, but we reward our talented team well!\n\n\n* Highly competitive salaries and annual bonuses\n\n* Ample paid time off as needed – we are about getting things done, not face time\n\n* Delicious catered lunch daily plus tons of snacks and beverages\n\n* The opportunity to work on high-impact software and programs that are already defining the future of education\n\n* Great location: walking distance to Caltrain and downtown Mountain View\n\n* Awesome team events and weekly board game nights\n\n* The ability to improve real lives\n\n* A fun, high-caliber team that trusts you and gives you the freedom to be brilliant – we treat new hires more like co-founders\n\n* Oh, and we offer all those other typical benefits as well: 401(k) + 4% matching & comprehensive insurance including medical, dental, vision, and life\n\n\n\n\nHOW TO APPLY\n\nAlong with your resume, please include:\n\n1) Links to any projects (we really like these). 
We especially like living, breathing projects; please do not send your code\n\n2) A cover letter that includes why you want to join Khan Academy\n\nLEARN MORE\n\n\n* KA Data Science blog: http://data.khanacademy.org/\n\n* Sal’s TED talk: http://www.ted.com/talks/salman_khan_let_s_use_video_to_reinvent_education.html\n\n* Our team: http://www.khanacademy.org/about/the-team\n\n* You Can Learn Anything: https://www.khanacademy.org/youcanlearnanything\n\n* Sal’s Interview with BBC: http://www.bbc.co.uk/programmes/b04hytg6\n\n\n\n\nWe are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, or Veteran status.
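The example inquiries above amount to event-log analysis. A minimal plain-Python sketch of the first one, using an invented log (the field names and values here are illustrative, not Khan Academy's real schema):

```python
from statistics import median

# Hypothetical event log: one record per (user, practice task).
# All field names and values are invented for illustration.
events = [
    {"user": 1, "completed": True,  "minutes": 12.0, "hints": False},
    {"user": 1, "completed": False, "minutes": 30.0, "hints": True},
    {"user": 2, "completed": True,  "minutes": 8.0,  "hints": False},
    {"user": 2, "completed": True,  "minutes": 15.0, "hints": False},
    {"user": 3, "completed": False, "minutes": 45.0, "hints": True},
]

# "How long does a typical user take?" -> median time over completed tasks.
typical_minutes = median(e["minutes"] for e in events if e["completed"])

# "What factors correlate with completion?" -> completion rate per factor level.
def completion_rate(records):
    return sum(r["completed"] for r in records) / len(records)

rate_with_hints = completion_rate([e for e in events if e["hints"]])
rate_without_hints = completion_rate([e for e in events if not e["hints"]])
```

In practice the same questions would run at scale over BigQuery, but the shape of the analysis is the same.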


See more jobs at Khan Academy

# How do you apply?\n\n This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.

IdeaMarket


closed
San Francisco

Data Scientist


IdeaMarket

San Francisco

matlab
This job post is closed and the position is probably filled. Please do not apply.
\n\n#Salary and compensation\n$75,000 — $110,000/year\n \n\n#Equity\n0.1 - 1.0\n\n\n#Location\nSan Francisco


See more jobs at IdeaMarket

Visit IdeaMarket's website

# How do you apply?\n\n This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.

Vision Fleet


closed
Venice

Data Scientist


Vision Fleet

Venice

backend
This job post is closed and the position is probably filled. Please do not apply.
We are looking for a dynamic Data Scientist to help develop the core IP that will power our analytics platform and take iQ to the next level. This is a critical role, so we need someone who doesn’t just process statistics, but who looks at data from\nmany angles, testing and discovering new insights independently.\n\n\nWHAT YOU'LL DO:\nLocated in LA or SF, you will report to the Head of Analytics, working closely with our entire VF team, external consultants and vendors. Specifically, we’re looking for you to do the following:\n1. Conduct sophisticated analyses of, and develop models for, energy usage, routing, utilization, car-sharing, and deployment around vehicle fleets;\n2. Develop robust algorithms that extract and process data from multiple sources, and cluster the results in our VFiQ backend (SQL/NoSQL);\n3. Design experiments and identify novel approaches for improving vehicle, fleet, and driver productivity that utilize nontraditional data sources;\n4. Assist with/create the telematics (on-board) data collection logic and post-processing scripts to monitor and quantify performance of individual vehicles and fleets;\n5. Develop data tools to assist management in real-life business decision making;\n6. Effectively communicate analytics results to VF management and customers using compelling visualizations and presentations.\n\nThis role is for someone proactive and inquisitive who can read data, spot trends, make predictions and present results succinctly. \n\n#Salary and compensation\n$80,000 — $120,000/year\n \n\n#Equity\n0.1 - 0.2\n\n\n#Location\nVenice
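Item 2 above (clustering fleet data) can be sketched with a self-contained one-dimensional k-means; the kWh/day readings and the two starting centers are invented for illustration, and say nothing about Vision Fleet's actual backend:

```python
# One-dimensional k-means for grouping vehicles by daily energy use.
# The kWh/day readings and starting centers are invented examples.
def kmeans_1d(values, centers, iters=20):
    for _ in range(iters):
        # Assignment step: each value goes to its nearest center.
        groups = [[] for _ in centers]
        for v in values:
            nearest = min(range(len(centers)), key=lambda i: abs(v - centers[i]))
            groups[nearest].append(v)
        # Update step: move each center to the mean of its group.
        centers = [sum(g) / len(g) if g else c for g, c in zip(groups, centers)]
    return centers, groups

usage = [4.1, 3.8, 4.5, 11.9, 12.4, 11.6]  # hypothetical kWh/day per vehicle
centers, groups = kmeans_1d(usage, centers=[0.0, 20.0])
```

Real fleet data is multi-dimensional (energy, routing, utilization), but the assign/update loop generalizes directly.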


See more jobs at Vision Fleet

Visit Vision Fleet's website

# How do you apply?\n\n This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.

Snapwiz


closed
Bangalore

Data Scientist


Snapwiz

Bangalore

edu

 

teaching

 

java

 

c

This job post is closed and the position is probably filled. Please do not apply.
Every month Snapwiz collects millions of user interaction records on its online learning platform (Wiley Learning Spaces, Wiley Orion, McGraw Hill Easy, Edulastic, etc). We use statistical analysis and machine learning techniques to administer adaptive testing, build personalized learning paths, and deliver learning recommendations. \n\nAs a data scientist, you’ll work with product managers, designers, and engineers to build data-driven features directly into our learning platform. \n\nIf you enjoy working with data to build models and solve hard challenges, then this job is for you. \n\n\nRequirements:\n* Strong background in Machine Learning, Statistics, and Data Analysis\n* Ability to set and meet your own project objectives & milestones \n* Ability to manage and mentor a team of junior data scientists \n* Experience in exploratory data analysis, modeling & visualization tools, preferably R and Shiny or Python/Pandas/SciPy/NumPy\n* Critical thinking and creativity skills. We are looking for individuals who can think of novel ways to analyze data and solve problems \n* Ability to communicate results and progress internally and externally in meetings, presentations, and tech talks\n* Master's (or advanced) degree or equivalent experience in a quantitative field \n* Experience programming in an object-oriented language (Java, C++, etc) is a plus\n* Experience in handling and mining large data sets, parallelization and run-time optimization is a plus\n* Prior experience with data mining in the education field is a plus \n\n#Salary and compensation\n$1,500,000/year\n \n\n#Equity\n0.15 - 0.75\n\n\n#Location\nBangalore
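The adaptive-testing piece can be illustrated with a one-parameter (Rasch) item response model, a common basis for such systems. This is a generic sketch with invented difficulties and responses, not Snapwiz's actual algorithm:

```python
import math

# One-parameter (Rasch) item response model: a common basis for
# adaptive testing. Difficulties and responses are invented examples.
def p_correct(ability, difficulty):
    """Probability of a correct response under the Rasch model."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

def update_ability(ability, difficulty, correct, lr=0.5):
    """One gradient step on the log-likelihood of the observed response."""
    observed = 1.0 if correct else 0.0
    return ability + lr * (observed - p_correct(ability, difficulty))

def next_item(ability, difficulties):
    """Pick the item closest in difficulty to the current estimate,
    which is the most informative choice under the Rasch model."""
    return min(difficulties, key=lambda d: abs(d - ability))

theta = 0.0  # ability estimate, updated after each response
for difficulty, correct in [(0.0, True), (1.0, True), (2.0, False)]:
    theta = update_ability(theta, difficulty, correct)
```

Production systems estimate abilities and difficulties jointly over the full response matrix; the loop above only shows the per-learner update.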


See more jobs at Snapwiz

Visit Snapwiz's website

# How do you apply?\n\n This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.

CollegeBol


closed
Ahmedabad

Data Scientist


CollegeBol

Ahmedabad

java

 

c

 

c plus plus
This job post is closed and the position is probably filled. Please do not apply.
Requirements\n\n• Strong background in Machine Learning, Statistics, Information Retrieval, or Graph Analysis\n• Some experience working with large datasets, preferably using tools like Hadoop, MapReduce, Pig, or Hive\n• 3+ years experience in developing high quality software, contributions to open source projects are a plus\n• Experience programming in an object oriented language (Python, Java, C#, C++, etc)\n• Knowledge of scripting languages like Python, familiarity with web frameworks a plus (Django)\n• Ability to track down complex data and engineering issues, evaluate different algorithmic approaches, and analyze data to solve problems\n• Able to learn new data driven products, features, and technologies\n• Ability to plan, set and meet your own project objectives & milestones. \n• Ability to coordinate effectively with team members in engineering, design, and product management \n\n#Salary and compensation\n$150,000 — $210,000/year\n \n\n#Equity\n0.25 - 1.0\n\n\n#Location\nAhmedabad


See more jobs at CollegeBol

Visit CollegeBol's website

# How do you apply?\n\n This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.

Pexeso


closed
San Francisco

Data Scientist


Pexeso

San Francisco

c plus plus

 

c

This job post is closed and the position is probably filled. Please do not apply.
- You are intellectually curious\n- You like to find a needle in a haystack and don't stop until you find it\n- You are familiar with tools like Open/RedShift, BigQuery, ...\n- You can write your own tools in Python (SciPy, Pandas), R, C++, ...\n- You like to optimize your own code so you don't waste unnecessary resources\n- You are open to new technology, but you don't want to deploy everything you read about on HN \n\n#Salary and compensation\n$70,000 — $110,000/year\n \n\n#Equity\n0.1 - 1.5\n\n\n#Location\nSan Francisco


See more jobs at Pexeso

Visit Pexeso's website

# How do you apply?\n\n This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.

Portent.IO


closed
Los Angeles

Data Scientist


Portent.IO

Los Angeles

javascript

 

java

 

math

 

php

This job post is closed and the position is probably filled. Please do not apply.
About us: We're a funded team working on a really exciting predictive analytics platform for the film industry. In short, we're using big data to help predict how much a movie will make before it's even been made (check out our site for more info). We've also secured partnerships with some very large market research companies, meaning that we have a completely unique and vast data set to play with.\n\nWhat we're looking for: A data scientist or an experienced coder (who could pick up data science skills) with a hacker's mentality to join us full time, who is interested in the film business and who will also be happy to travel to LA occasionally. We're not interested in how old you are, what your qualifications are, or where you're from; we're only interested in you being able to do a damn good job and being passionate about film!\n\nYou will need to be knowledgeable in the following:\n\n- R or Python (in particular its big-data toolsets) or Mathematica\n- PHP\n- Experience with multiple different types of databases\n\nNice-to-have skills but not essential:\n\n- JavaScript/HTML5\n- Experience writing scraper tools\n- Experience doing Twitter sentiment analysis \n\n#Salary and compensation\n$20,000 — $50,000/year\n \n\n#Equity\n0.2 - 2.0\n\n\n#Location\nLos Angeles
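For the Twitter-sentiment item, a toy lexicon-based scorer shows the basic idea; the word lists and sample tweets are invented, and a production system would use a trained classifier or a full sentiment lexicon:

```python
# Toy lexicon-based sentiment scorer for tweets about a film.
# Word lists and sample tweets are invented; a real system would use
# a trained classifier or a full sentiment lexicon.
POSITIVE = {"great", "amazing", "loved", "masterpiece"}
NEGATIVE = {"boring", "awful", "hated", "flop"}

def sentiment(text):
    """Crude polarity: (# positive words) - (# negative words)."""
    words = [w.strip(".,!?") for w in text.lower().split()]
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

tweets = [
    "Loved it, an amazing masterpiece",
    "boring and awful, total flop",
]
scores = [sentiment(t) for t in tweets]
```

Aggregating such scores over scraped tweets is one crude input a box-office model might consume.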


See more jobs at Portent.IO

Visit Portent.IO's website

# How do you apply?\n\n This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.

Wunwun


closed
New York City

Data Scientist


Wunwun

New York City

matlab
This job post is closed and the position is probably filled. Please do not apply.
\n\n#Salary and compensation\n$60,000 — $120,000/year\n \n\n#Equity\n0.0 - 0.5\n\n\n#Location\nNew York City


See more jobs at Wunwun

Visit Wunwun's website

# How do you apply?\n\n This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.

Adchemix


closed
Boston

Data Scientist


Adchemix

Boston

marketing

 

non tech

This job post is closed and the position is probably filled. Please do not apply.
You are really smart. You probably have a PhD. And all you want to do is find the magic that lives inside of data sets. We are building a wonderful set of relationships amongst purchase intent, acquisition costs and customer lifetime value. We need your brains and computing brawn to help us deepen our understanding and uncover wonderful insights for our retailer partners. You know how to wield serious software and dig into massive data sets. You should have a keen understanding of marketing & retail data. \n\n#Salary and compensation\n$85,000 — $120,000/year\n \n\n#Equity\n0.1 - 1.0\n\n\n#Location\nBoston


See more jobs at Adchemix

Visit Adchemix's website

# How do you apply?\n\n This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.