Strong Analytics


closed

data science

This job post is closed and the position is probably filled. Please do not apply.
Strong Analytics is seeking a data scientist to join our team in developing machine learning pipelines, building statistical models, and generally helping our clients discover value in their data.

At Strong, we pride ourselves not only on building the right solutions for our clients through research and development, but on implementing and scaling those solutions through strong engineering. This role therefore requires deep expertise in applying statistics and machine learning to real-world problems where data must be gathered, transformed, cleaned, and integrated into a larger architecture.

We offer a comprehensive compensation package, including:

* Competitive salary
* Profit sharing or equity, based on experience
* Health, dental, vision, and life insurance
* Four weeks paid vacation
* 401k with employer matching

# Requirements

Candidates will be evaluated based on their experience in the following areas (though no one is expected to be an expert in all of them):

* Statistical modeling and hypothesis testing
* Applying machine learning to real-world problems
* Writing clean SQL and ETL pipelines
* Building Python applications
* Building deep neural networks with modern tools such as PyTorch or TensorFlow
* Integrating with various RDBMSs (e.g., Postgres, MySQL) and distributed data stores (e.g., Hadoop)
* Deploying applications to cloud-based infrastructure (e.g., AWS)
* Creating and interacting with RESTful APIs
* Managing *nix servers
* Writing unit tests
* Collaborating via Git

Applicants with a PhD in a quantitative field are preferred; however, all applicants will be considered based on their experience and demonstrated skill and aptitude.

Applicants should be able to travel infrequently (<5% of the time) for team meetings, conferences, and occasional client site visits.



# How do you apply?

This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.

Strong Analytics


closed

engineer

This job post is closed and the position is probably filled. Please do not apply.
Strong Analytics is seeking an application and data engineer to collaborate with our team on developing machine learning and Extract-Transform-Load (ETL) data pipelines, building RESTful APIs to trained statistical models, and deploying real-time streaming applications for continuous data processing.

This role is focused on applying and embedding machine learning solutions crafted by our team of PhD-trained data scientists. As such, it does not require deep expertise in machine learning or statistics, but rather broad expertise in ingesting, storing, and analyzing data at scale.

To be eligible for this role, you are expected to have experience:

* Building and deploying Python applications to cloud-based infrastructure (e.g., AWS)
* Writing complex SQL queries for a variety of data stores (e.g., Postgres, MySQL, Redshift)
* Deploying (not necessarily designing/testing) machine learning pipelines and statistical models using one or more libraries such as TensorFlow, Keras, PyTorch, scikit-learn, or XGBoost
* Designing distributed applications that use workers and queues to manage asynchronous tasks
* Designing and building RESTful APIs
* Managing *nix servers via the terminal
* Managing and monitoring scheduled/batch jobs
* Writing unit tests
* Using continuous integration
* Collaborating via Git

Ideally, you would also have experience with (or an aptitude to learn about):

* Building Apache Spark applications to analyze and learn from data at scale
* Building and deploying streaming applications (e.g., Spark Streaming, Akka Streams)
* Deploying applications powered by Hadoop and/or other services provided by AWS EMR

You do not need a PhD (or any formal education, for that matter) for this role. Applicants will be evaluated first and foremost on their experience and skill in the areas described above.

You should be able to travel infrequently (<5% of your time) for team meetups and conferences.

# Remote or On-Site

This position offers 100% remote work for candidates outside the Chicago area with exceptional communication and remote collaboration skills. Candidates in the Chicago area who can work in person with our team are preferred.

# What We Offer

* Competitive salary
* Profit sharing or equity, based on experience
* Health insurance
* Generous vacation policy
* Flexible work-from-home policy
* Company-issued MacBook Pros



# How do you apply?

This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.