Remote Engineer + Big Data Jobs in Sep 2019 📈 Open Startup

get a remote job
you can do anywhere

55 remote Engineer + Big Data jobs at companies like Pixalate, IQVIA (The Human Data Science Company), and Ultra Tendency; last posted 1 year ago

Get an email of all new remote Engineer + Big Data jobs

Subscribe


👉 Hiring for a remote Engineer + Big Data position?

Post a Job - $299
on the 🏆 #1 remote jobs board

This year

Pixalate


Big Data Engineer

big data

engineer

1yr
Who are we?

Pixalate helps the digital advertising ecosystem become a safer and more trustworthy place to transact by providing intelligence on "bad actors" using our world-class data. Our products provide benchmarks, analytics, research, and threat-intelligence solutions to the global media industry. We make this happen by processing terabytes of data and trillions of data points a day across desktop, mobile, tablet, and connected TV, using machine-learning and artificial-intelligence models.

We are the world's #1 decision-making platform for digital advertising. And don't just take our word for it: Forrester Research consistently depends on our monthly indexes to make industry predictions.

What does the media have to say about us?

* Harvard Business Review
* Forbes
* NBC News
* CNBC
* Business Insider
* AdAge
* AdAge
* CSO Online
* Mediapost
* Mediapost
* The Drum
* Mediapost
* Mediapost

How is it working at Pixalate?

* We believe in small teams that produce high output
* Slack is a way of life; short emails are encouraged
* A fearless attitude is held in high esteem
* Bold ideas are worshipped
* Chess players do really well
* Titles don't mean much; you earn respect by producing results
* Everyone's a data addict and an analytical thinker (you won't survive if you run away from details)
* Collaboration, collaboration, collaboration

What will you do?

* Support existing processes running in production
* Design, develop, and support various big data solutions at scale (hundreds of billions of transactions a day)
* Find smart, fault-tolerant, self-healing, cost-efficient solutions to extremely hard data problems
* Take ownership of the various big data solutions, troubleshoot issues, and provide production support
* Conduct research on new technologies that can improve current processes
* Contribute to publications of case studies and white papers delivering cutting-edge research in the ad fraud, security, and measurement space

What are the minimum requirements for this role?

* Bachelor's, Master's, or PhD in Computer Science, Computer Engineering, Software Engineering, or another related technical field
* A minimum of 3 years of experience in a software or data engineering role
* Excellent teamwork and communication skills
* Extremely strong analytical, critical-thinking, and problem-solving abilities
* Proficiency in Java
* Very strong knowledge of SQL and the ability to implement advanced queries to extract information from very large datasets
* Experience working with very large datasets using big data technologies such as Spark, BigQuery, Hive, Hadoop, Redshift, etc.
* Ability to design, develop, and deploy end-to-end data pipelines that meet business requirements
* Strong experience with the AWS and Google Cloud platforms is a big plus
* Deep understanding of computer science concepts such as data structures, algorithms, and algorithmic complexity
* Deep understanding of the foundations of statistics and machine learning algorithms is a huge plus
* Experience with machine learning big data technologies such as R, Spark ML, H2O, Mahout, etc. is a plus

What do we have to offer?

Located in sunny Palo Alto and Playa Vista, CA, the core of Pixalate's DNA lies in innovation. We focus on doing things differently and we challenge each other to be the best we can be. We offer:

* Experienced leadership and founding team
* Casual environment (as long as you wear clothes, we're good!)
* Flexible hours (yes, we mean it: you will never have to sit in traffic again!)
* FREE lunches! (You name it, we've got it)
* Fun team events
* A high-performing team that wants to win and have fun doing it
* Extremely competitive compensation
* OPPORTUNITY (Pixalate will be what you make it)
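To give a flavour of the "advanced queries over very large datasets" requirement, here is a minimal sketch of a per-dimension fraud aggregation. All table and column names are invented for illustration, and at Pixalate's scale this would run on BigQuery, Spark SQL, or Redshift rather than SQLite:

```python
import sqlite3

# In-memory toy table standing in for an impression log.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE impressions (device TEXT, is_fraud INTEGER)")
conn.executemany(
    "INSERT INTO impressions VALUES (?, ?)",
    [("mobile", 1), ("mobile", 0), ("ctv", 1), ("desktop", 0)],
)

# Fraud rate per device type, worst offenders first.
rows = conn.execute(
    """
    SELECT device,
           COUNT(*) AS impressions,
           SUM(is_fraud) AS flagged,
           ROUND(100.0 * SUM(is_fraud) / COUNT(*), 1) AS fraud_pct
    FROM impressions
    GROUP BY device
    ORDER BY fraud_pct DESC
    """
).fetchall()
print(rows)
```

The same GROUP BY / ratio shape scales to the warehouse engines the post lists; only the dialect details change.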

See more jobs at Pixalate

# How do you apply? This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.
Apply for this Job

👉 Please mention that you found the job on Remote OK; this helps us get more companies to post here!

When applying for jobs, you should NEVER have to pay to apply. That is a scam! Posts that link to pages about "how to work online" are also scams; don't use them or pay for them. Always verify that you're actually talking to the company in the job post and not an imposter. Scams in remote work are rampant, so be careful! When you click the apply button above, you will leave Remote OK and go to that company's job application page outside this site. Remote OK accepts no liability or responsibility as a consequence of any reliance upon information there (external sites) or here.

IQVIA The Human Data Science Company


Senior Engineer Big Data Spark Scala

big data

scala

senior

engineer

1yr
We are looking for creative, intellectually curious, and entrepreneurial Big Data Software Engineers to join our London-based team.

The team

Join a high-profile team working on ground-breaking problems in health outcomes across disease areas including ophthalmology, oncology, neurology, chronic diseases such as diabetes, and a variety of very rare conditions. You will work hand-in-hand with statisticians, epidemiologists, and disease-area experts across the wider global RWE Solutions team, leveraging a vast variety of anonymous patient-level information from sources such as electronic health records. The data encompasses IQVIA's access to over 530 million anonymised patients as well as bespoke, custom partnerships with healthcare providers and payers.

The role

As part of a highly talented Engineering and Data Science team, you will write highly performant and scalable code that runs on top of our Big Data platform (Spark/Hive/Impala/Hadoop), and collaborate with Data Science and Machine Learning experts on the ETL process, including cohort-building efforts.

What to expect:

* Working in a cross-functional team alongside talented Engineers and Data Scientists
* Building scalable and high-performant code
* Mentoring less experienced colleagues within the team
* Implementing ETL and feature-extraction pipelines
* Monitoring cluster (Spark/Hadoop) performance
* Working in an Agile environment
* Refactoring and moving our current libraries and scripts to Scala/Java
* Enforcing coding standards and best practices
* Working in a geographically dispersed team
* Working in an environment with a significant number of unknowns, both technical and functional

Our ideal candidate: essential experience

* BSc or MSc in Computer Science or a related field
* Strong analytical and problem-solving skills, with a personal interest in subjects such as math/statistics, machine learning, and AI
* Solid knowledge of data structures and algorithms
* Proficiency in Scala, Java, and SQL
* Strong experience with Apache Spark, Hive/Impala, and HDFS
* Comfort in an Agile environment using Test-Driven Development (TDD) and Continuous Integration (CI)
* Experience refactoring code with scale and production in mind
* Familiarity with Python, Unix/Linux, Git, Jenkins, JUnit, and ScalaTest
* Experience integrating data from multiple data sources
* NoSQL databases such as HBase, Cassandra, and MongoDB
* Experience with any of the following distributions of Hadoop: Cloudera, MapR, Hortonworks

Bonus points for experience with:

* Other functional languages such as Haskell and Clojure
* Big Data ML toolkits such as Mahout, SparkML, and H2O
* Apache Kafka, Apache Ignite, and Druid
* Container technologies such as Docker
* Cloud platform technologies such as DC/OS, Marathon, Apache Mesos, Kubernetes, and Apache Brooklyn

This is an exciting opportunity to be part of one of the world's leading Real World Evidence-based teams, working to help our clients answer specific questions globally, make more informed decisions, and deliver results.

Our team within the Real-World & Analytics Solutions (RWAS) Technology division is a fast-growing group of collaborative, enthusiastic, and entrepreneurial individuals. In our never-ending quest for opportunities to harness the value of Real World Evidence (RWE), we are at the centre of IQVIA's advances in areas such as machine learning and cutting-edge statistical approaches. Our efforts improve retrospective clinical studies, under-diagnosis of rare diseases, personalized treatment-response profiles, disease-progression predictions, and clinical decision-support tools.

We invite you to join IQVIA™.

IQVIA is a strong advocate of diversity and inclusion in the workplace. We believe that a work environment that embraces diversity will give us a competitive advantage in the global marketplace and enhance our success. We believe that an inclusive and respectful workplace culture fosters a sense of belonging among our employees, builds a stronger team, and allows individual employees the opportunity to maximize their personal potential.
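The ETL and cohort-building work described above can be sketched in miniature. Plain Python stands in for Spark DataFrame transformations here, and the record fields and condition labels are invented for illustration:

```python
# Toy anonymised patient records (fields invented for this sketch).
records = [
    {"patient_id": 1, "condition": "diabetes", "visits": 5},
    {"patient_id": 2, "condition": "oncology", "visits": 2},
    {"patient_id": 3, "condition": "diabetes", "visits": 9},
]

def build_cohort(rows, condition):
    # Select the sub-population of interest (the "cohort").
    return [r for r in rows if r["condition"] == condition]

def extract_features(cohort):
    # Derive one per-patient feature for downstream modelling.
    return {r["patient_id"]: {"visit_count": r["visits"]} for r in cohort}

features = extract_features(build_cohort(records, "diabetes"))
print(features)
```

In a real pipeline each step would be a Spark transformation (`filter`, `select`, aggregations) over far larger data, but the filter-then-featurise shape is the same.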

See more jobs at IQVIA The Human Data Science Company

Apply for this Job


Ultra Tendency


Software Engineer Big Data

big data

dev

engineer

digital nomad

2yr
Your responsibilities:

* Deliver value to our clients in all phases of the project life cycle
* Convert specifications into detailed instructions and logical steps, followed by their implementation
* Build program code, test it, and deploy it to various environments (Cloudera, Hortonworks, etc.)
* Enjoy being challenged and solving complex data problems on a daily basis
* Be part of our newly formed team in Berlin and help drive its culture and work attitude

Job requirements:

* Strong experience developing software using Java or comparable languages (e.g., Scala)
* Practical experience with data ingestion, analysis, integration, and design of Big Data applications using Apache open-source technologies
* Strong background in developing on Linux
* Proficiency with the Hadoop ecosystem and its tools
* Solid computer science fundamentals (algorithms, data structures, and programming skills in distributed systems)
* Sound knowledge of SQL, relational concepts, and RDBMS systems is a plus
* A Computer Science (or equivalent) degree preferred, or comparable years of experience
* Ability to work in an English-speaking, international environment

We offer:

* Fascinating tasks and interesting Big Data projects in various industries
* The benefit of 10 years of delivering excellence to our customers
* Work with our open-source Big Data gurus, such as our Apache HBase committer and Release Manager
* Work in the open-source community and become a contributor
* Learn from open-source enthusiasts you will find nowhere else in Germany!
* Fair pay and bonuses
* Additional benefits such as a free BVG ticket and fresh fruit in the office
* The possibility to work remotely or in one of our development labs throughout Europe
* Cutting-edge equipment and tools

See more jobs at Ultra Tendency

Apply for this Job


Anchormen


Senior Big Data Engineer

big data

senior

engineer

2yr
Overview

Anchormen is growing rapidly! We are therefore looking for additional experienced Big Data Engineers to serve our customer base at the desired level. This entails giving advice, building and maintaining Big Data platforms, and deploying data science solutions and models in enterprise environments.

We build and deliver data-driven solutions that do not depend on one specific tool or technology. As an independent consultant and engineer, your knowledge and experience will be a major contribution to our colleagues and customers. A diverse and challenging position, where technology is paramount. Are you joining our team?

Responsibilities

* You will be working on 1 to 3 different projects at any given time.
* On average you will work 50% of the time at the Anchormen office and 50% at the client's location.
* You work closely with the business to achieve data excellence.
* You have a proactive attitude towards the needs of the client.
* You will be building test-driven software.
* You gather data from external APIs and internal sources and add value to the data platform.
* You work closely with data scientists to bring machine learning algorithms into a production environment.

Your profile

* You work and think at a Bachelor's or Master's level.
* You have a minimum of two years' experience in a similar position.
* You have knowledge of OO and functional programming in languages such as Java, Scala, and Python (knowledge of several languages is a plus).
* You have knowledge of and experience with building and implementing APIs at a large scale.
* You have thorough knowledge of SQL.
* You believe in the principle of "clean coding": you don't just write code for yourself or a computer, but for your colleagues as well.
* You have hands-on experience with technologies such as Hadoop, Spark, Kafka, Cassandra, HBase, Hive, Elastic, etc.
* You are familiar with the Agile principles.
* You are driven to keep developing yourself and to follow the latest technologies.

About Anchormen

We help our clients use Big Data in a smart way, which leads to new insights, knowledge, and efficiency. We advise our clients on designing their Big Data platform. Our consultants provide advice, implement the appropriate products, and create complex algorithms to do the proper analyses and predictions.

Why Anchormen

Anchormen has an open working environment. Everyone is open to initiatives. You can be proactive, and you have every freedom to make your work part of our success. We don't believe in micro-management; we give our people the freedom to function optimally. Hard work naturally also plays a part, but with enjoyment!

What we offer

* Flexibility to work from home.
* Competitive market salary.
* Training and development budget for employees' personal growth.
* Being part of a fast-growing and innovative company.
* Travel allowance.
* Friendly and cooperative colleagues.
* Daily office fruit and snacks.
* All the coffee you can consume!

See more jobs at Anchormen

Apply for this Job


AdRoll


Lead Big Data Engineer

big data

exec

engineer

2yr
About the role:

AdRoll's data infrastructure processes 100 TB of compressed data, 4 trillion events, and 100 billion real-time events daily on a scalable, highly available platform. As a member of the data & analytics team, you will work closely with data engineers, analysts, and data scientists to develop novel systems, algorithms, and processes that handle massive amounts of data using languages such as Python and Java.

Responsibilities:

* Develop and operate our data pipeline and infrastructure
* Work closely with analysts and data scientists to develop data-driven dashboards and systems
* Tackle some of the most challenging problems in high-performance, scalable analytics
* Be available for after-hours issues and on-call duty, while aiming incessantly to reduce after-hours incidents
* Communicate with Product and Engineering Managers
* Mentor junior engineers on the team

Qualifications:

* A BS or MS degree in Computer Science or Computer Engineering, or equivalent experience
* 4-6 years of experience, at least 2 of which include leading teams
* Experience with scalable systems, large-scale data processing, and ETL pipelines
* Experience with big data technologies such as Hadoop, Hive, Spark, or Storm
* Experience with NoSQL databases such as Redis, Cassandra, or HBase
* Experience with SQL and relational databases such as Postgres or MySQL
* Experience developing and deploying applications on Linux infrastructure

Bonus points:

* Knowledge of Amazon EC2 or other cloud-computing services
* Experience with Presto (https://prestodb.io/)

Compensation:

* Competitive salary and equity
* Medical / Dental / Vision benefits
* Paid time off and a generous holiday schedule
* The opportunity to win the coveted Golden Bagel award

See more jobs at AdRoll

Apply for this Job


AdRoll


Senior Big Data Engineer

big data

senior

engineer

2yr

Stats (beta): 👍 404 views, ✍️ 0 applied (0%)
About the role:

AdRoll's data infrastructure processes 100 TB of compressed data, 4 trillion events, and 100 billion real-time events daily on a scalable, highly available platform. As a member of the data & analytics team, you will work closely with data engineers, analysts, and data scientists to develop novel systems, algorithms, and processes that handle massive amounts of data using languages such as Python and Java.

Responsibilities:

* Develop and operate our data pipeline and infrastructure
* Work closely with analysts and data scientists to develop data-driven dashboards and systems
* Tackle some of the most challenging problems in high-performance, scalable analytics
* Be available for after-hours issues and on-call duty, while aiming incessantly to reduce after-hours incidents
* Be available to assist junior engineers on the team

Qualifications:

* A BS or MS degree in Computer Science or Computer Engineering, or equivalent experience
* 3+ years of experience
* Experience with scalable systems, large-scale data processing, and ETL pipelines
* Experience with big data technologies such as Hadoop, Hive, Spark, or Storm
* Experience with NoSQL databases such as Redis, Cassandra, or HBase
* Experience with SQL and relational databases such as Postgres or MySQL
* Experience developing and deploying applications on Linux infrastructure

Bonus points:

* Knowledge of Amazon EC2 or other cloud-computing services
* Experience with Presto (https://prestodb.io/)

Compensation:

* Competitive salary and equity
* Medical / Dental / Vision benefits
* Paid time off and a generous holiday schedule
* The opportunity to win the coveted Golden Bagel award

See more jobs at AdRoll

Apply for this Job



SmileDirectClub

Data Engineer

Stats (beta): 👍 1,025 views, ✍️ 0 applied (0%)
SmileDirectClub is looking for an experienced Data Engineer to help design and scale our data pipelines so that our engineers, operations team, marketing managers, and analysts can make better decisions with data. We are looking for engineers who understand that simplicity and reliability are aspects of a system that can't be tacked on, but are carefully calculated with every decision made. If you have experience working on ETL pipelines and love thinking about how data models and schemas should be architected, we want to hear from you.

SmileDirectClub was founded on a simple belief: everyone deserves a smile they love. We are the first digital brand for your smile. The company was built upon a realization that recent trends in 3D printing and telehealth could bring about disruptive change to the invisible-aligner market. By leveraging proprietary cutting-edge technology, we're helping customers avoid office visits and cutting their costs by up to 70 percent, because people shouldn't have to pay a small fortune for a better smile.

You will:

* Design and build new dimensional data models and schema designs to improve the accessibility, efficiency, and quality of internal analytics data
* Build, monitor, and maintain analytics data ETL pipelines
* Implement systems for tracking data quality and consistency
* Work closely with the Analytics, Marketing, Finance, and Operations teams to understand data and analysis requirements
* Work with teams to continue to evolve data models and data flows to enable analytics for decision making (e.g., improve instrumentation, optimize logging, etc.)

We're looking for someone who:

* Has a curiosity about how things work
* Is willing to roll up their sleeves to leverage Big Data and discover new key performance indicators
* Has built enterprise data pipelines and can craft clean and beautiful code in SQL, Python, and/or R
* Has built batch data pipelines with Hadoop or Spark as well as with relational database engines, and understands their respective strengths and weaknesses
* Has experience with ETL jobs, metrics, alerting, and/or logging
* Has expert knowledge of query optimization in MPP data warehouses (Redshift, Snowflake, Cloudera, Hortonworks, MapR, or similar)
* Has experience in the latest, cutting-edge design and development of big data solutions
* Is proficient in the latest trends in big data analytics and architecture
* Can jump into situations with few guardrails and make things better
* Possesses strong computer science fundamentals: data structures, algorithms, programming languages, distributed systems, and information retrieval
* Is a strong communicator; explaining complex technical concepts to product managers, support, and other engineers is no problem for you
* When things break, and they will, is eager and able to help fix them
* Is someone that others enjoy working with, thanks to your technical competence and positive attitude
* Is ready to design and create ROLAP, MOLAP, and RDBMS data stores

How to stand out from the rest:

* An academic background in computer science or mathematics (BSc or MSc), or demonstrated hands-on industry experience
* Experience with agile development processes
* Experience building simple scripts and web applications using Python, Ruby, or PHP
* A solid grasp of basic statistics (regression, hypothesis testing)
* Experience in small start-up environments

Benefits:

* Competitive salary
* Health, vision, and dental insurance
* 401K plan
* PTO
* Discounted SmileDirectClub aligner treatment

About SmileDirectClub:

SmileDirectClub is backed by Camelot Venture Group, a private investment group that has been pioneering the direct-to-consumer industry since the early '90s, particularly in highly regulated industries. If you've heard of 1-800-CONTACTS, Quicken Loans, HearingPlanet, DiabetesCareClub, or SongbirdHearing, then you've heard of Camelot. Their hands-on approach, extensive networking, and operational expertise ensure their portfolio companies reach their potential.

Having closed a $46.7 million capital raise in July 2016 led by Align Technology (NASDAQ: ALGN), owner of the Invisalign® brand, SmileDirectClub is now valued at $275 million and is continuing to grow share in the U.S. orthodontics market.
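The dimensional-modelling work mentioned above boils down to fact tables joined against dimension tables on surrogate keys. A toy star-schema sketch follows; every table, column, and figure is invented, and SQLite stands in for an MPP warehouse like Redshift or Snowflake:

```python
import sqlite3

# One dimension table and one fact table, the smallest possible star schema.
conn = sqlite3.connect(":memory:")
conn.executescript(
    """
    CREATE TABLE dim_channel (channel_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE fact_orders (order_id INTEGER, channel_id INTEGER, revenue REAL);
    INSERT INTO dim_channel VALUES (1, 'web'), (2, 'retail');
    INSERT INTO fact_orders VALUES (10, 1, 95.0), (11, 1, 80.0), (12, 2, 120.0);
    """
)

# Revenue by channel: the fact table joins to the dimension on its key.
revenue = conn.execute(
    """
    SELECT d.name, SUM(f.revenue)
    FROM fact_orders f JOIN dim_channel d USING (channel_id)
    GROUP BY d.name ORDER BY d.name
    """
).fetchall()
print(revenue)
```

Good dimensional models keep the fact table narrow (keys and measures) so that exactly this kind of join-and-aggregate query stays fast and readable for analysts.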

See more jobs at SmileDirectClub

Visit SmileDirectClub's website

Apply for this Job


Hotjar


Big Data Engineer


Valletta

amazon

elasticsearch

python

engineer

2yr

Stats (beta): 👍 6,329 views, ✍️ 0 applied (0%)
**Note: although this is a remote position, we are currently only seeking candidates in time zones between UTC-2 and UTC+7.**

Hotjar is looking for a driven and ambitious DevOps Engineer with Big Data experience to support and expand our cloud-based infrastructure, which is used by thousands of sites around the world. The Hotjar infrastructure currently processes more than 7,500 API requests per second, delivers over a billion pieces of static content every week, and hosts databases well into terabyte-size ranges, making this an interesting and challenging opportunity. As Hotjar continues to grow rapidly, we are seeking an engineer who has experience dealing with high-traffic cloud-based applications and can help Hotjar scale as our traffic multiplies.

This is an excellent career opportunity to join a fast-growing remote startup in a key position.

In this position, you will:

- Be part of our DevOps team building and maintaining our web application and server environment.
- Choose, deploy, and manage tools and technologies to build and support a robust infrastructure.
- Be responsible for identifying bottlenecks and improving the performance of all our systems.
- Ensure all necessary monitoring, alerting, and backup solutions are in place.
- Research and keep up to date on trends in big data processing and large-scale analytics.
- Implement proof-of-concept solutions in the form of prototype applications.

Location: Valletta

See more jobs at Hotjar

Apply for this Job


SkyTruth


Big Data Engineer

big data

engineer

2yr

Stats (beta): 👍 807 views, ✍️ 0 applied (0%)
This is an extraordinary opportunity to use cutting-edge big data and machine learning tools while doing something good for the planet and open-sourcing all your code.

SkyTruth is seeking an engineer to join the team building Global Fishing Watch, a partnership of SkyTruth, Oceana and Google, supported by Leonardo DiCaprio, and dedicated to saving the world's oceans from ruinous overfishing [Wired]. Our team works directly with the Google engineers who support Cloud ML, TensorFlow and Dataflow, and we are a featured Google partner.

https://cloud.google.com/customers/global-fishing-watch/

https://environment.google/projects/fishing-watch/

https://blog.google/products/maps/mapping-global-fishing-activity-machine-learning/

Your job is to develop, improve and operationalize the multiple pipelines we use to process terabytes of vessel tracking data collected by a constellation of satellites. We have a data set containing billions of vessel position reports, from which we derive behaviors based on movement characteristics using Cloud ML, and publish a dynamically updated map of global commercial fishing activity.

You will join a fully distributed team of engineers, data scientists and designers who are building and open-sourcing the next generation of the product, and who are deeply committed to creating a positive impact in the world while solving novel problems with cutting-edge tools.

The company is headquartered in Washington DC, the data science team is in San Francisco, and we have engineers in the US, Europe, South America and Indonesia. Daily scrums are scheduled around the US East Coast timezone (so that kind of sucks for the guy in Indonesia :-).

Because this role is open to remote work, we will get a lot of applicants. We are not just looking for an engineer with great skills who wants to work with cool tech. We also want you to be inspired by the project, so please tell us something that excites you about what we're doing when you contact us.

Here's some more you can read about the impact of our work:

New York Times: Palau vs the Poachers

Science: Ending hide and seek at sea

Washington Post: How Google is helping to crack down on illegal fishing — from space
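The behavior-derivation step the posting describes (labeling track segments from consecutive vessel position reports) can be sketched crudely in Python. The tuple layout, function names, and the 0.5/5.0-knot thresholds below are invented for illustration; the real system feeds movement features into Cloud ML models rather than fixed cutoffs.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometers between two lat/lon points."""
    dlat = radians(lat2 - lat1)
    dlon = radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

def classify_segment(p1, p2):
    """Label the segment between two position reports by average speed.

    p1 and p2 are (timestamp_seconds, lat, lon) tuples; the speed
    thresholds are illustrative assumptions, not the real model.
    """
    (t1, lat1, lon1), (t2, lat2, lon2) = p1, p2
    hours = (t2 - t1) / 3600.0
    if hours <= 0:
        return "invalid"
    speed_knots = haversine_km(lat1, lon1, lat2, lon2) / hours / 1.852
    if speed_knots < 0.5:
        return "anchored"
    if speed_knots < 5.0:
        return "possibly_fishing"  # trawlers typically fish at low speed
    return "transiting"
```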

See more jobs at SkyTruth

# How do you apply? This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.
Apply for this Job

👉 Please mention that you found the job on Remote OK; this helps us get more companies to post here!

When applying for jobs, you should NEVER have to pay to apply. That is a scam! Posts that link to pages with "how to work online" are also scams. Don't use them or pay for them. Also, always verify that you're actually talking to the company in the job post and not an imposter. Scams in remote work are rampant, so be careful! When clicking the button to apply above, you will leave Remote OK and go to that company's job application page outside this site. Remote OK accepts no liability or responsibility as a consequence of any reliance upon information there (on external sites) or here.

Spinn3r


Java 'Big Data' Engineer


Spinn3r


big data

java

engineer

3yr

Stats (beta): 👍 589 views, ✍️ 0 applied (0%)
Company

Spinn3r is a social media and analytics company looking for a talented Java “big data” engineer.

As a mature, ten (10) year old company, Spinn3r provides high-quality news, blog and social media data for analytics, search, and social media monitoring companies. We’ve just recently completed a large business pivot, and we’re in the process of shipping new products, so it’s an exciting time to come on board!

Ideal Candidate

We’re looking for someone with a passion for technology, big data, and the analysis of vast amounts of content; someone with experience aggregating and delivering data derived from web content, and someone comfortable in a generalist and devops role. We require that you have knowledge of standard system administration tasks and a firm understanding of modern cluster architecture.

We’re a San Francisco company, and ideally there should be at least a 4-hour overlap with the Pacific Standard Time zone (PST / UTC-8). If you don’t have a natural time overlap with UTC-8, you should be willing to work an alternative schedule so you can communicate easily with the rest of the team.

Culturally, we operate as a “remote” company and require that you’re generally available for communication, self-motivated, and able to remain productive.

We are open to either a part-time or full-time independent contractor role.

Responsibilities

* Understanding our crawler infrastructure;
* Ensuring top-quality metadata for our customers. There’s a significant batch-job component to analyzing the output to ensure top-quality data;
* Making sure our infrastructure is fast, reliable, fault tolerant, etc. At times this may involve diving into the source of tools like ActiveMQ to understand how the internals work. We contribute to Open Source development to give back to the community; and
* Building out new products and technology that will directly interface with customers. This includes cool features like full-text search, analytics, etc. It’s extremely rewarding to build something from the ground up and push it to customers directly.

Architecture

Our infrastructure consists of Java on Linux (Debian/Ubuntu) with the stack running on ActiveMQ, Zookeeper, and Jetty. We use Ansible to manage our boxes. We have a full-text search engine based on Elasticsearch which also backs our Firehose API.

Here are the cool products that you get to work with:

* A large Linux / Ubuntu cluster with the OS versioned using both Ansible and our own Debian packages for software distribution;
* Large amounts of data indexed from the web and social media. We index 5-20TB of data per month and want to expand to 100TB per month; and
* A SOLR / Elasticsearch migration / install. We’re experimenting with bringing this up now, so it would be valuable to get your feedback.

Technical Skills

We’re looking for someone who meets a number of the following requirements:

* Experience in modern Java development and associated tools: Maven, IntelliJ IDEA, Guice (dependency injection);
* A passion for testing, continuous integration, and continuous delivery;
* ActiveMQ, which powers our queue server for scheduling crawl work;
* A general understanding of and passion for distributed systems;
* Ansible or equivalent experience with configuration management;
* Standard web API use and design (HTTP, JSON, XML, HTML, etc.); and
* Linux, Linux, Linux. We like Linux!

Cultural Fit

We’re a lean startup and very driven by our interaction with customers, as well as their happiness and satisfaction. Our philosophy is that you shouldn’t be afraid to throw away a week’s worth of work if our customers aren’t interested in moving in that direction.

We hold the position that our customers are our responsibility, and we try to listen to them intently and consistently:

* Proficiency in English is a requirement. Since you will have colleagues in various countries with various primary language skills, we all need to use English as our common company language. You must also be able to work with email, draft proposals, etc. Internally we work like a large distributed Open Source project and use tools like email, Slack, Google Hangouts, and Skype;
* Familiarity working with a remote team and the ability (and desire) to work for a virtual company. You should have a home workstation, fast Internet access, etc.;
* You must be able to manage your own time and your own projects. Self-motivated employees fit in well with the rest of the team; and
* It goes without saying, but being friendly and a team player is very important.

Compensation

* Salary based on experience;
* We’re a competitive, great company to work for; and
* We offer the ability to work remotely, allowing for a balanced work-life situation.
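The "ActiveMQ powers our queue server for scheduling crawl work" detail hints at a classic crawl frontier: schedule each URL once, hand work out in order. A minimal in-memory stand-in (Python rather than the company's Java, and a deque in place of a real broker; all names invented) might look like:

```python
from collections import deque

class CrawlFrontier:
    """Toy crawl frontier: FIFO scheduling with URL de-duplication.

    An in-memory deque stands in for a message broker like ActiveMQ
    so that only the scheduling logic is shown.
    """

    def __init__(self):
        self._queue = deque()
        self._seen = set()

    def enqueue(self, url):
        # Schedule each URL at most once, which any crawl scheduler
        # must guarantee before handing work to fetchers.
        if url not in self._seen:
            self._seen.add(url)
            self._queue.append(url)

    def next_url(self):
        # Hand out the oldest scheduled URL, or None when drained.
        return self._queue.popleft() if self._queue else None
```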

See more jobs at Spinn3r


Hotjar


Big Data DevOps Engineer


Hotjar


big data

devops

engineer

3yr

Stats (beta): 👍 632 views, ✍️ 0 applied (0%)
Note: Although this is a remote position, we are currently only seeking candidates in timezones between UTC-2 and UTC+7.

Hotjar is looking for a driven and ambitious DevOps Engineer with Big Data experience to support and expand our cloud-based infrastructure, used by thousands of sites around the world. The Hotjar infrastructure currently processes more than 7,500 API requests per second, delivers over a billion pieces of static content every week, and hosts databases well into terabyte-size ranges, making this an interesting and challenging opportunity. As Hotjar continues to grow rapidly, we are seeking an engineer who has experience dealing with high-traffic cloud-based applications and can help Hotjar scale as our traffic multiplies.

This is an excellent career opportunity to join a fast-growing remote startup in a key position.

In this position, you will:

* Be part of our DevOps team building and maintaining our web application and server environment.
* Choose, deploy and manage tools and technologies to build and support a robust infrastructure.
* Be responsible for identifying bottlenecks and improving the performance of all our systems.
* Ensure all necessary monitoring, alerting and backup solutions are in place.
* Do research and keep up to date on trends in big data processing and large-scale analytics.
* Implement proof-of-concept solutions in the form of prototype applications.
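Identifying bottlenecks at 7,500 requests per second usually starts with latency percentiles rather than averages. A minimal nearest-rank percentile helper, purely illustrative and not Hotjar's actual tooling:

```python
import math

def percentile(samples, p):
    """Nearest-rank percentile: the smallest sample such that at
    least p percent of the data is less than or equal to it."""
    if not samples:
        raise ValueError("no samples")
    s = sorted(samples)
    k = math.ceil(p / 100 * len(s)) - 1
    return s[max(0, k)]
```

In practice you would feed this per-endpoint latency samples and alert on the p95/p99, since a healthy mean can hide a slow tail.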

See more jobs at Hotjar



Stats (beta): 👍 1,794 views, ✍️ 0 applied (0%)
The Big Data Services engineering team is responsible for providing software tools, platforms and APIs for collecting and processing large datasets, complete with search, analytics and real-time pipeline processing capabilities, to address the unique challenges of our industry. We are building large distributed systems that will be the heart of our data architecture: they serve billions of requests, provide search and analytics across structured and semi-structured datasets, and scale out to tens of terabytes while maintaining low latency, availability and immediate discoverability by clients. We are reimagining the way we architect our data infrastructure across the company and are looking for an experienced software engineer to help. If solving intricate engineering issues with distributed systems, platform APIs, real-time big-data pipelines and search and discovery query patterns is your calling, we would like to hear from you.

Major Responsibilities:
- Develop and maintain internal Big Data services and tools
- Leverage Service-Oriented Architecture to create APIs, libraries and frameworks that our Studios will use
- Help build the real-time Data Platform to support our games
- Design and build data processing architecture in AWS
- Design, support and build data pipelines
- Develop ETL in a distributed processing environment

What You Need for this Position:
- Bachelor's degree in a technical field (e.g., MIS, Computer Science, Engineering, or a related field of study)
- Full-stack experience, as you'll be delivering data and analytics solutions for business, analytics and technology groups across the organization
- Minimum of 3 years of demonstrated experience with object-oriented programming (Java)
- Working knowledge of Python
- Experience in Go (Golang) is a huge plus
- Advanced skills in Linux shell and SQL are required
- Background with databases
- Experience in Data Modeling/Integration and designing REST-based APIs for consumer-facing services is a plus
- Good knowledge of open source technologies and the DevOps paradigm
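Stripped to its core, the ETL responsibility above is extract, filter, and aggregate. A toy batch ETL step in Python, with an invented event schema (`player`, `score`) used purely for illustration:

```python
def etl(events):
    """Minimal batch ETL: drop malformed rows, normalize, aggregate.

    `events` are raw dicts as a collector might emit them; the field
    names are illustrative, not an actual game-analytics schema.
    """
    totals = {}
    for e in events:
        if "player" not in e or "score" not in e:
            continue  # skip malformed rows instead of failing the batch
        player = str(e["player"]).lower()  # normalize the key
        totals[player] = totals.get(player, 0) + int(e["score"])
    return totals
```

The same filter/normalize/aggregate shape appears whether the job runs in a single process, in Hadoop, or in a streaming pipeline; only the execution engine changes.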

See more jobs at Peak Games

Visit Peak Games's website


Hazelcast


Big Data Engineer


Hazelcast


engineer

big data

3yr

Stats (beta): 👍 788 views, ✍️ 0 applied (0%)
Would you like to work on a new and exciting big data project? Do you enjoy any of the following?

* Solving complex problems around distributed data processing.
* Implementing non-trivial infrastructure code.
* Creating well-crafted and thoroughly tested features, taking full responsibility from the design phase.
* Paying attention to all aspects of code quality, from clean code to allocation rates.
* Digging into mechanical sympathy concepts.
* Delivering a technical presentation at a conference.

At Hazelcast you will have the opportunity to work with some of the best engineers out there:

* Who delve into JVM code.
* Who implement and scrutinize garbage collection algorithms.
* Who take any piece of software and multiply its performance by applying deep technical understanding.
* Who regularly squash bugs in the depths of a JVM.

We are looking for people who can deliver solid production code. You may either work in our office in London or Istanbul, or code remotely from a home office. It is also preferable that you are within a few hours of the CET timezone, as this is where most of the developers are based.
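Much of the distributed data processing described here rests on hash-partitioning keys across cluster members (Hazelcast's partitioned data structures use 271 partitions by default). A one-function sketch of the idea, in Python for brevity rather than the JVM languages the role targets:

```python
import zlib

def owner_node(key: str, n_partitions: int) -> int:
    """Deterministically map a key to a partition.

    This is the core trick behind a distributed in-memory data grid:
    every member computes the same partition for the same key, so any
    node can route a request without coordination. CRC32 stands in
    for the real (more carefully chosen) hash function.
    """
    return zlib.crc32(key.encode("utf-8")) % n_partitions
```

Partitions are then assigned to members via a partition table, so rebalancing after a node joins or leaves only moves whole partitions, not individual keys.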

See more jobs at Hazelcast


Clear Returns


Big Data Engineer For Glasgow Based Start Up


Clear Returns


engineer

big data

3yr

Stats (beta): 👍 401 views, ✍️ 0 applied (0%)
This is an exciting opportunity that incorporates managing all technical and engineering aspects of the analytics infrastructure. You will maintain and improve the data warehouse and ensure that data obtained from diverse and varied sources is appropriately captured, cleaned, and utilised. As a member of a small team you could also get the opportunity to be involved in Data Science projects, though this is not a prerequisite of the role. You will have significant influence on the future of the technologies used by the team, which is central to the company's growth strategy.

See more jobs at Clear Returns


Crossover


Senior Big Data Software Engineer $90K 100% Position


Crossover


senior

engineer

big data

dev

4yr

Stats (beta): 👍 864 views, ✍️ 0 applied (0%)
Are you a Senior Software Engineer who has spent several years working with Big Data technologies? Have you created streaming analytics algorithms to process terabytes of real-time data and deployed Hadoop or Cassandra clusters across dozens of VMs? Have you been part of an organization driven by a DevOps culture, where the engineer has end-to-end responsibility for the product, from development to operating it in production? Are you willing to join a team of elite engineers working on a fast-growing analytics business? Then this role is for you!

Job Description

The Software and DevOps engineer will help build the analytics platform for Bazaarvoice data that will power our client-facing reporting, product performance reporting, and financial reporting. You will also help us operationalize our Hadoop clusters, Kafka and Storm services, and high-volume event collectors, and build out improvements to our custom analytics job portal in support of Map/Reduce and Spark jobs. The Analytics Platform is used to aggregate data sets to build out known new product offerings related to analytics and media, as well as a number of pilot initiatives based on this data. You will need to understand the business cases of the various products and build a common platform and set of services that help all of our products move fast and iterate quickly. You will help us pick and choose the right technologies for this platform.

Key Responsibilities

In your first 90 days you can expect the following:

* An overview of our Big Data platform code base and development model
* A tour of the products and technologies leveraging the Big Data Analytics Platform
* 4 days of Cloudera training to provide a quick ramp-up on the technologies involved
* By the end of the 90 days, you will be able to complete basic enhancements to code supporting large-scale analytics using Map/Reduce, as well as contribute to the operational maintenance of a high-volume event collection pipeline.

Within the first year you will:

* Own the design, implementation, and support of major components of platform development. This includes working with the platform team's various stakeholders to understand their requirements and deliver high-leverage capabilities.
* Have a complete grasp of the technology stack, and help guide where we go next.
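The Map/Reduce jobs mentioned above all follow the same map, shuffle, reduce pattern. A single-process word-count sketch in Python shows the shape (illustrative only, not Bazaarvoice's code; in a real Hadoop job the grouping happens in the distributed shuffle):

```python
from collections import defaultdict
from itertools import chain

def map_phase(doc):
    # Emit (key, value) pairs, as a Hadoop mapper would.
    return [(word.lower(), 1) for word in doc.split()]

def reduce_phase(pairs):
    # Group by key and sum the values, as reducers do after the shuffle.
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

def word_count(docs):
    """Run the whole map/shuffle/reduce pipeline over an iterable of docs."""
    return reduce_phase(chain.from_iterable(map_phase(d) for d in docs))
```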

See more jobs at Crossover


Instructure


Senior Software Engineer Big Data


Instructure


senior

engineer

big data

dev

4yr

Stats (beta): 👍 793 views, ✍️ 0 applied (0%)
Instructure was founded to define, develop, and deploy superior, easy-to-use software. (And that's what we did / do / will keep on doing.) We are dedicated to the fight against iffy, mothbally, shoddy software. We make better, more usable tools for teaching and learning (you know, stuff people will actually use). A better connected and more open edtech ecosystem. And more effective ways for everyone everywhere to access education, make discoveries, share knowledge, be inspired, and do big things. We accomplish all this by giving smart, creative, passionate people opportunities to create awesome. So here's your opportunity.

We are hiring engineers passionate about using data to gain insight, drive behavior and improve our products. Our software helps millions of users learn and grow. Come help accelerate the learning process by developing data-centric features for K-12, higher education and corporate users.

WHAT YOU WILL BE DOING:

* The Instructure suite of SaaS applications produces terabytes of events and student information weekly. Your challenge will be to create the systems that organize this data and return insights to students, teachers and administrators. You will also work to integrate data-driven features into core Instructure products.
* This team engineers the data and analytics platform for the entire Instructure application portfolio. This is a growing team at Instructure with the opportunity to provide tangible positive impact to the business and end users. We are looking for creative, self-motivated, highly collaborative, extremely technical people who can drive a vision to reality.

See more jobs at Instructure


Convertro


Big Data Engineer


Convertro


big data

engineer

4yr

Stats (beta): 👍 594 views, ✍️ 0 applied (0%)
Do you want to solve real-world business problems with cutting-edge technology in a creative and exciting start-up? Are you a smart person who gets stuff done?

Convertro is looking for you. We are hiring an engineer with experience building analytical systems in MapReduce, Hadoop, HBase, or similar distributed systems programming. You will improve the scalability, flexibility, and stability of our existing Hadoop architecture as well as help develop our next-generation data analytics platform. You will rapidly create prototypes and quickly iterate to a stable, production-quality release candidate.

See more jobs at Convertro


American Express


Big Data Engineer


American Express


engineer

big data

4yr

Stats (beta): 👍 699 views, ✍️ 0 applied (0%)
American Express is looking for energetic, high-performing software engineers to help shape our technology and product roadmap. You will be part of the fast-paced big data team. As part of the Customer Marketing and Big Data Platforms organization, which enables Big Data and batch/real-time analytical solutions leveraging transformational technologies (Hadoop, HDFS, MapReduce, Hive, HBase, Pig, etc.), you will be working on innovative platform and data science projects across multiple business units (e.g., RIM, GNICS, OPEN, CS, EG, GMS, etc.). Offer of employment with American Express is conditioned upon the successful completion of a background verification check, subject to applicable laws and regulations.

Qualifications

· Hands-on expertise with application design, software development, and automated testing; experience collaborating with the business to drive requirements/Agile story analysis
· Ability to effectively interpret technical and business objectives and challenges, and articulate solutions
· Ability to think abstractly and deal with ambiguous/under-defined problems; ability to enable business capabilities through innovation
· Looks proactively beyond the obvious for continuous improvement opportunities
· High energy, demonstrated willingness to learn new technologies, and pride in how fast they develop working software
· Strong programming knowledge in C++ / Java
· Solid understanding of data structures and common algorithms
· Knowledge of RDBMS concepts and experience with SQL
· Understanding of and experience with UNIX / Shell / Perl / Python scripting
· Experience with Big Data components/frameworks (Hadoop, HBase, HDFS, Pig, Hive, Sqoop, Flume, Oozie, Avro, etc.) and other AJAX tools/frameworks
· Database query optimization and indexing

Bonus skills:
· Object-oriented design and coding with a variety of languages: Java, J2EE, and parallel and distributed systems
· Machine learning / data mining
· Web services design and implementation using REST / SOAP; bug-tracking, source control, and build systems

American Express is an equal opportunity employer and makes employment decisions without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other status protected by law. Click here to view the 'EEO is the Law' poster.

ReqID: 15017390

See more jobs at American Express


Hearst Corporation


Full Stack Big Data Engineer


Hearst Corporation


engineer

full stack

big data

4yr

Stats (beta): 👍 952 views, ✍️ 0 applied (0%)
We’re creating a game-changing modern content platform, built from the ground up. It will give our users, editors, and advertisers tools that enable them to react to the world in real time when making decisions around content publishing and revenue generation. We are doing this by working with Big Data scientists to build a modern information pipeline that enables intelligent and optimized media applications, and we’re using modern web technologies to do it. We’re building an open, service-oriented platform driven by APIs, and believe passionately in crafting simple, elegant solutions to complex technological and product problems. Our day-to-day is much like a technology start-up, with the strong support of a large corporation that believes in what we're doing.

We’re hiring talented and passionate Software Engineers to be part of a corporate open-source movement in the company to build out our new platform. The ideal candidate has extensive experience writing clean object-oriented code, building and working with RESTful APIs, has worked in cloud-based environments like AWS, and likes being part of a collaborative tech team.

We consistently hold ourselves to high standards of software development, code review and deployment. Our workflow embraces automated testing and continuous integration. We work closely with our DevOps team to allow developers to focus on what they do best: creatively building innovative software solutions.

See more jobs at Hearst Corporation


Turbine WB Games


Senior Big Data Engineer


Turbine WB Games


senior

engineer

big data

4yr

Stats (beta): 👍 632 views, ✍️ 0 applied (0%)
WBPlay – a team within Turbine that is responsible for delivering key technology platforms that support games across WB – is seeking a Senior Big Data Engineer to provide hands-on development within our Core Analytics Platform team. As a key contributor reporting directly to the Director of Analytics Platform Development, this individual will work closely with developers and dev-ops engineers across multiple teams to build and operate a best-in-class game analytics platform.

The successful candidate will participate in software development and dev-ops projects, using Agile methodologies, to build scalable, reliable technologies and infrastructure for our cross-game data analytics platform. This big data platform powers analytics for WB’s games across multiple networks, devices and operating environments, including Xbox One, PS4, iOS, and Android. This is a role combining proven technical skills in various Big Data ecosystems with a strong focus on open-source (Apache) software and cloud (AWS) infrastructure.

Our ideal candidate is fluent in several big data technologies – including Hadoop, Spark, MPP databases, and NoSQL databases – and has deep experience implementing complex distributed computing environments which ingest, process, and surface hundreds of terabytes of data from dozens of sources, in near real time, for analysis by data scientists and other stakeholders.

JOB RESPONSIBILITIES

* Responsible for the building, deployment, and maintenance of mission-critical analytics solutions that process data quickly at big data scales.
* Contributes design, code, configurations, and documentation for components that manage data ingestion, real-time streaming, batch processing, data extraction, transformation, and loading across multiple game franchises.
* Owns one or more key components of the infrastructure and works to continually improve it, identifying gaps and improving the platform’s quality, robustness, maintainability, and speed.
* Cross-trains other team members on technologies being developed, while also continuously learning new technologies from other team members.
* Interacts with engineering teams across WB and ensures that solutions meet customer requirements in terms of functionality, performance, availability, scalability, and reliability.
* Performs development, QA, and dev-ops roles as needed to ensure total end-to-end responsibility for solutions.
* Works directly with business analysts and data scientists to understand and support their use cases.

See more jobs at Turbine WB Games

# How do you apply? This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.
Apply for this Job

👉 Please reference you found the job on Remote OK, this helps us get more companies to post here!

When applying for jobs, you should NEVER have to pay to apply. That is a scam! Posts that link to pages with "how to work online" are also scams. Don't use them or pay for them. Also always verify you're actually talking to the company in the job post and not an imposter. Scams in remote work are rampant, be careful! When clicking on the button to apply above, you will leave Remote OK and go to the job application page for that company outside this site. Remote OK accepts no liability or responsibility as a consequence of any reliance upon information on there (external sites) or here.

Verizon

Big Data Platform Engineer

big data
engineer

4yr

Stats (beta): 👍 564 views, ✍️ 0 applied (0%)
Grow your IT career at one of the leading global technology companies. We offer hands-on exposure to state-of-the-art systems, applications and infrastructures.

Responsibilities

* Architect, design and build a fault-tolerant, scalable big data platform based primarily on the Hadoop ecosystem.
* Build a high-throughput messaging framework to transport high-volume data.
* Use different protocols as needed for different data services (NoSQL/JSON/REST/JMS).
* Develop a framework to deploy RESTful web services.
* Build ETL, distributed caching, transactional and messaging services.
* Architect and build a security-compliant user management framework for a multitenant big data platform.
* Build high-availability (HA) architectures and deployments primarily using big data technologies.
* Create and manage data pipelines.

See more jobs at Verizon


Verizon

Big Data Application Engineer

big data
engineer

4yr

Stats (beta): 👍 659 views, ✍️ 0 applied (0%)
Stay on the front lines of groundbreaking technology. We're committed to a dynamic, ever-evolving infrastructure and the hard work it takes to keep our reliable network thriving. Help support the growing demands of an interconnected world.

Responsibilities

Verizon Corporate Technology's Big Data Group is looking for Big Data engineers with expert-level experience in architecting and building our new Hadoop, NoSQL and in-memory platforms and data collectors. You will be part of the team building one of the world's largest Big Data platforms, capable of ingesting hundreds of terabytes of data that will be consumed for business analytics, operational analytics, text analytics, and data services, and of powering Big Data solutions for various Verizon business units.

This is a unique opportunity to be part of building disruptive technology where Big Data will be used as a platform to build solutions for analytics, data services and more.

Responsibilities:

* Hands-on contribution to business logic using the Hadoop ecosystem (Java MapReduce, Pig, Scala, HBase, Hive).
* Work on technologies related to NoSQL, SQL and in-memory platforms.

See more jobs at Verizon


Verizon

Principal Big Data Platform Engineer

big data
engineer

4yr

Stats (beta): 👍 529 views, ✍️ 0 applied (0%)
Grow your IT career at one of the leading global technology companies. We offer hands-on exposure to state-of-the-art systems, applications and infrastructures.

Responsibilities

Verizon Corporate Technology's Big Data Group is looking for Big Data engineers with expert-level experience in building our new Hadoop, NoSQL and in-memory platforms, data collectors and applications. You will be part of the team building one of the world's largest Big Data platforms, capable of ingesting hundreds of terabytes of data that will be consumed for business analytics, operational analytics, text analytics, and data services, and of powering Big Data solutions for various Verizon business units.

Responsibilities:

* Architect, design and build a fault-tolerant, scalable big data platform based primarily on the Hadoop ecosystem.
* Build a high-throughput messaging framework to transport high-volume data.
* Provide guidance to members of the team building complex, high-throughput big data subsystems.
* Use different protocols as needed for different data services (NoSQL/JSON/REST/JMS).
* Develop a framework to deploy RESTful web services.
* Build ETL, distributed caching, transactional and messaging services.
* Architect and build a security-compliant user management framework for a multitenant big data platform.
* Build high-availability (HA) architectures and deployments primarily using big data technologies.
* Expert-level experience with the Hadoop ecosystem (Spark, HBase, Solr).
* Create and manage data pipelines.

See more jobs at Verizon


Jet.com

Big Data Engineer

engineer
big data

4yr

Stats (beta): 👍 746 views, ✍️ 0 applied (0%)
"Engineers are Astronauts at Jet"
- Mike Hanrahan, Jet's CTO

You'll be responsible for helping to build a world-class data platform to collect, process, and manage the vast amount of information generated by Jet's rapidly growing business.

About Jet

Jet's mission is to become the smartest way to shop and save on pretty much anything. Combining a revolutionary pricing engine, a world-class technology and fulfillment platform, and incredible customer service, we've set out to create a new kind of e-commerce. At Jet, we're passionate about empowering people to live and work brilliantly.

About Jet's Internal Engine

We're building a new kind of company, and we're building it from the inside out, which means that investing in hiring, developing, and retaining the brightest minds in the world is a top priority. Everything we do is grounded in three simple values: trust, transparency, and fairness. From our business model to our culture, we live our values to the extreme, whether we're dealing with employees, retail partners, or consumers. We believe that happiness is the highest level of success and we want every person that crosses paths with Jet to achieve it. If you're an ambitious, smart, natural collaborator who likes taking risks, influencing, and innovating in a challenging hyper-growth environment, we'd love to talk to you about joining our team.

About the Job

We are looking for an exceptional Data Engineer to help build a world-class analytical platform to collect, store and expose both structured and unstructured data generated by a rapidly growing system landscape at Jet.com.

You can expect a freewheeling, informal work environment, populated by a combination of folks from top companies that have produced many successful products, as well as some PhDs who have escaped the ivory tower.

We have lots of perks like free lunches, but you will be so engrossed with the challenges of the job that the free stuff will be more like icing on the cake.

Because we work on cutting-edge technologies, we need someone who is a creative problem solver, resourceful in getting things done, and productive working independently or collaboratively. This person would take on the following responsibilities:

* Design, implement and manage a near real-time ingestion pipeline into a data warehouse and Hadoop data lake.
* Gather and process raw data at scale - collect data across all business domains (our functional-first, event-sourced, microservices backend) and expose mechanisms for large-scale parallel processing.
* Process unstructured data into a form suitable for analysis, and then empower state-of-the-art analysis for analysts, scientists, and APIs.
* Support business decisions with ad hoc analysis as needed.
* Evangelize an extremely high standard of code quality, system reliability, and performance.
* Influence cross-functional architecture in sprint planning.

About You

* Experience in running, using and troubleshooting the Apache Big Data stack, i.e. Hadoop FS, Hive, HBase, Kafka, Pig, Oozie, YARN.
* Programming experience, ideally in Scala or F#, but we are open to other experience if you're willing to learn the languages we use.
* Proficient scripting skills, i.e. Unix shell and/or PowerShell.
* Experience processing large amounts of structured and unstructured data with MapReduce.
* We use Azure extensively, so experience with cloud infrastructure will help you hit the ground running.

Compensation Philosophy

Our compensation philosophy is simple but powerful: give everyone a meaningful stake in the company, the purest form of ownership. That's why, on top of base salary, Jet's comp structure is heavily weighted in equity. Our collective hard work, high performance, and tenure are rewarded as our equity builds in value.

Benefits & Perks

Competitive Salaries. Real Ownership in the form of Stock Options. Unlimited Vacation. Full Healthcare Benefits. Exceptional Work Environment. Learning & Development Opportunities. Just-for-fun Networking & Events.
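The MapReduce experience this posting asks for boils down to the map/shuffle/reduce pattern. A minimal, framework-free sketch in plain Python of that pattern (record contents and function names are illustrative, not Jet's actual code):

```python
from collections import defaultdict

def map_phase(records):
    # Map: emit (key, 1) pairs for each word in each record.
    for record in records:
        for word in record.split():
            yield (word.lower(), 1)

def shuffle(pairs):
    # Shuffle: group values by key, as the framework does between phases.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: aggregate the grouped values per key.
    return {key: sum(values) for key, values in groups.items()}

records = ["checkout completed", "checkout failed", "checkout completed"]
counts = reduce_phase(shuffle(map_phase(records)))
print(counts["checkout"])  # 3
```

In Hadoop or Spark the same three stages run distributed across a cluster, with the shuffle handled by the framework rather than an in-memory dict.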

See more jobs at Jet.com


Criteo

Senior Software Engineer Java Big Data

java
senior
engineer
big data

4yr

Stats (beta): 👍 817 views, ✍️ 0 applied (0%)
CRITEO is looking to recruit senior software developers who turn it up to eleven for its R&D Center in Grenoble (south-east France). Your main missions will be to:

- Build systems that make the best decision in 50 ms, half a million times per second, across three continents and six datacenters, 24/7.
- Find the signal hidden in tens of TB of data, in one hour, using over a thousand nodes on our Hadoop cluster - and constantly keep getting better at it while measuring the impact on our business.
- Get stuff done. A problem partially solved today is better than a perfect solution next year. Have an idea during the night? Code it in the morning, push it at noon, test it in the afternoon and deploy it the next morning.
- High stakes, high rewards: a 1% increase in performance may yield millions for the company. But if a single bug goes through, the Internet goes down (we're only half joking).
- Develop open-source projects. Because we are working at the forefront of technology, we are dealing with problems that few have faced. We're big users of open source, and we'd like to give back to the community.

See more jobs at Criteo


nugg.ad AG predictive behavioral targeting

Big Data Engineer

engineer
big data

4yr

Stats (beta): 👍 488 views, ✍️ 0 applied (0%)
We are currently building our next-generation data management platform and are searching for enthusiastic developers eager to join our team and push back the frontiers of big data processing in high-throughput architectures. Take the unique opportunity to shape and grow an early-stage product which will have a significant impact across the advertising market.

As our Big Data Engineer you will:

* Design and build the core of our new platform
* Identify and deploy the latest big data technologies that suit our challenges
* Define new features and products together with our data-science, consulting and sales teams
* Migrate existing solutions to our Spark/Scala-based architecture

See more jobs at nugg.ad AG predictive behavioral targeting



Stats (beta): 👍 4,950 views, ✍️ 0 applied (0%)
#ABOUT US
We're a London-based startup that is building an economy around people's data and attention. In short, we're creating a digital marketplace where consumers can dynamically license their personal data and attention to brands in return for a payment.

Our tech stack currently includes: Node (Heroku), ReactJS and AngularJS (Firebase), Express, Mongoose, SuperTest, MongoDB (MongoLab), npm (npmjs). Our distributed development team covers the development of the responsive web, mobile and browser extension products.

We've recently completed the functional MVP and will be pushing on towards our closed-beta launch at the end of January.

#ABOUT YOU
We're looking for a freelance dev-ops person who has significant experience configuring, managing, and monitoring servers and backend services at scale to support our core development team.

#COME HELP US WITH PROJECTS LIKE...
- Review our platform architecture requirements and deploy a well-documented, secure and scalable cloud-based solution
- Tighten up security of our servers
- Set up autoscaling of our workers
- Make our deployments faster and safer
- Scale our MongoDB clusters to support our growing data sizes
- Improve API performance
- Automate more processes
- Make sure our backup and recovery procedures are well tested
- Implement a centralized logging system
- Instrument our application with more metrics and create dashboards
- Remove single points of failure in our architecture

#YOU SHOULD...
- Have real-world experience building scalable systems, working with large data sets, and troubleshooting various back-end challenges under pressure
- Have experience configuring monitoring, logging, and other tools to provide visibility and actionable alerts
- Understand the full web stack, networking, and low-level Unix computing
- Always be thinking of ways to improve the reliability, performance, and scalability of an infrastructure
- Be self-motivated and comfortable with responsibility

#WHY WORK WITH US?

Work remotely from anywhere in the world, or from our HQ in London, UK. Just be willing to do a bit of traveling every quarter for some face-to-face time with the whole team.
Be involved in an early-stage, fast-growth startup that has already received national press coverage.

Extra tags: Devops, AppSec, NodeJS, Cloud, Mongodb, API, Sys Admin, Engineer, Backend, Freelance, Consultant, security, big data, startup
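One of the projects listed above is a centralized logging system; such systems usually start by emitting logs as structured JSON lines that an aggregator can ship and index. A minimal sketch using only Python's standard library (the logger name and fields are illustrative, not C8's actual setup):

```python
import json
import logging
import sys

class JsonFormatter(logging.Formatter):
    """Render each log record as a single JSON line for a log shipper."""
    def format(self, record):
        return json.dumps({
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        })

# Attach the JSON formatter to a stdout handler, as a container or
# process supervisor forwarding logs to an aggregator would expect.
handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(JsonFormatter())
logger = logging.getLogger("api")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("request served")
# prints {"level": "INFO", "logger": "api", "message": "request served"}
```

The same JSON-lines shape is what centralized stacks (e.g. an ELK-style pipeline) consume, which is why it is a common first step before picking an aggregator.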

See more jobs at C8

Visit C8's website


Crossover

Senior Big Data Software Engineer $90K

senior
engineer
big data
dev

4yr

Stats (beta): 👍 635 views, ✍️ 0 applied (0%)
Are you a Senior Software Engineer who has spent several years working with Big Data technologies? Have you created streaming analytics algorithms to process terabytes of real-time data and deployed Hadoop or Cassandra clusters over dozens of VMs? Have you been part of an organization driven by a DevOps culture, where the engineer has end-to-end responsibility for the product, from development to operating it in production? Are you willing to join a team of elite engineers working on a fast-growing analytics business? Then this role is for you!

Job Description

The Software and DevOps engineer will help build the analytics platform for Bazaarvoice data that will power our client-facing reporting, product performance reporting, and financial reporting. You will also help us operationalize our Hadoop clusters, Kafka and Storm services, and high-volume event collectors, and build out improvements to our custom analytics job portal in support of Map/Reduce and Spark jobs. The Analytics Platform is used to aggregate data sets to build out known new product offerings related to analytics and media, as well as a number of pilot initiatives based on this data. You will need to understand the business cases of the various products and build a common platform and set of services that help all of our products move fast and iterate quickly. You will help us pick and choose the right technologies for this platform.

Key Responsibilities

In your first 90 days you can expect the following:

* An overview of our Big Data platform code base and development model
* A tour of the products and technologies leveraging the Big Data Analytics Platform
* 4 days of Cloudera training to provide a quick ramp-up on the technologies involved
* By the end of the 90 days, you will be able to complete basic enhancements to code supporting large-scale analytics using Map/Reduce, as well as contribute to the operational maintenance of a high-volume event collection pipeline.

Within the first year you will:

* Own design, implementation, and support of major components of platform development. This includes working with the various stakeholders for the platform team to understand their requirements and deliver high-leverage capabilities.
* Have a complete grasp of the technology stack, and help guide where we go next.
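The streaming analytics this posting alludes to typically reduces to windowed aggregation over timestamped events. A toy, framework-free sketch of a tumbling-window counter in Python (event names and the window size are illustrative, not Bazaarvoice's actual pipeline):

```python
from collections import defaultdict

def tumbling_window_counts(events, window_seconds=60):
    """Group (timestamp, key) events into fixed windows and count per key.

    A toy version of the aggregation a Storm or Spark streaming job
    would perform continuously over an event collector's output.
    """
    windows = defaultdict(lambda: defaultdict(int))
    for ts, key in events:
        # Align each event to the start of its fixed-size window.
        window_start = ts - (ts % window_seconds)
        windows[window_start][key] += 1
    return windows

events = [(0, "view"), (30, "view"), (61, "click"), (90, "view")]
result = tumbling_window_counts(events)
print(result[0]["view"])   # 2
print(result[60]["view"])  # 1
```

A real streaming job adds the hard parts this sketch omits: out-of-order events, watermarks, and state that survives worker failure.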

See more jobs at Crossover


Crossover

Senior Big Data Software Engineer

senior
engineer
big data
dev

4yr

Stats (beta): 👍 2,084 views, ✍️ 0 applied (0%)
Are you a Senior Software Engineer who has spent several years working with Big Data technologies? Have you created streaming analytics algorithms to process terabytes of real-time data and deployed Hadoop or Cassandra clusters over dozens of VMs? Have you been part of an organization driven by a DevOps culture, where the engineer has end-to-end responsibility for the product, from development to operating it in production? Are you willing to join a team of elite engineers working on a fast-growing analytics business? Then this role is for you!

Job Description

The Software and DevOps engineer will help build the analytics platform for Bazaarvoice data that will power our client-facing reporting, product performance reporting, and financial reporting. You will also help us operationalize our Hadoop clusters, Kafka and Storm services, and high-volume event collectors, and build out improvements to our custom analytics job portal in support of Map/Reduce and Spark jobs. The Analytics Platform is used to aggregate data sets to build out known new product offerings related to analytics and media, as well as a number of pilot initiatives based on this data. You will need to understand the business cases of the various products and build a common platform and set of services that help all of our products move fast and iterate quickly. You will help us pick and choose the right technologies for this platform.

Key Responsibilities

In your first 90 days you can expect the following:

* An overview of our Big Data platform code base and development model
* A tour of the products and technologies leveraging the Big Data Analytics Platform
* 4 days of Cloudera training to provide a quick ramp-up on the technologies involved
* By the end of the 90 days, you will be able to complete basic enhancements to code supporting large-scale analytics using Map/Reduce, as well as contribute to the operational maintenance of a high-volume event collection pipeline.

Within the first year you will:

* Own design, implementation, and support of major components of platform development. This includes working with the various stakeholders for the platform team to understand their requirements and deliver high-leverage capabilities.
* Have a complete grasp of the technology stack, and help guide where we go next.

See more jobs at Crossover


Moz

Software Engineer Big Data

engineer
dev
big data
digital nomad

4yr

Stats (beta): 👍 594 views, ✍️ 0 applied (0%)
Full Time: Sr. Software Engineer - Big Data at Moz in Seattle, WA or Remote

See more jobs at Moz


Crossover

Senior Big Data Software Engineer $60K

senior
engineer
big data
dev

4yr

Stats (beta): 👍 712 views, ✍️ 0 applied (0%)
Are you a Senior Software Engineer who has spent several years working with Big Data technologies? Have you created streaming analytics algorithms to process terabytes of real-time data and deployed Hadoop or Cassandra clusters over dozens of VMs? Have you been part of an organization driven by a DevOps culture, where the engineer has end-to-end responsibility for the product, from development to operating it in production? Are you willing to join a team of elite engineers working on a fast-growing analytics business? Then this role is for you!

Job Description

The Software and DevOps engineer will help build the analytics platform for Bazaarvoice data that will power our client-facing reporting, product performance reporting, and financial reporting. You will also help us operationalize our Hadoop clusters, Kafka and Storm services, and high-volume event collectors, and build out improvements to our custom analytics job portal in support of Map/Reduce and Spark jobs. The Analytics Platform is used to aggregate data sets to build out known new product offerings related to analytics and media, as well as a number of pilot initiatives based on this data. You will need to understand the business cases of the various products and build a common platform and set of services that help all of our products move fast and iterate quickly. You will help us pick and choose the right technologies for this platform.

Key Responsibilities

In your first 90 days you can expect the following:

* An overview of our Big Data platform code base and development model
* A tour of the products and technologies leveraging the Big Data Analytics Platform
* 4 days of Cloudera training to provide a quick ramp-up on the technologies involved
* By the end of the 90 days, you will be able to complete basic enhancements to code supporting large-scale analytics using Map/Reduce, as well as contribute to the operational maintenance of a high-volume event collection pipeline.

Within the first year you will:

* Own design, implementation, and support of major components of platform development. This includes working with the various stakeholders for the platform team to understand their requirements and deliver high-leverage capabilities.
* Have a complete grasp of the technology stack, and help guide where we go next.

Bazaarvoice is a network that connects brands and retailers to the authentic voices of people where they shop. Each month, more than 500 million people view and share authentic opinions, questions and experiences about tens of millions of products in the Bazaarvoice network. Our technology platform amplifies these voices into the places that influence purchase decisions. Network analytics help marketers and advertisers provide more engaging experiences that drive brand awareness, consideration, sales and loyalty. Headquartered in Austin, Texas, Bazaarvoice has offices in Chicago, London, Munich, New York, Paris, San Francisco, Singapore, and Sydney.

Total compensation is $30/hour.

See more jobs at Crossover

# How do you apply? This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.
Apply for this Job

👉 Please reference you found the job on Remote OK, this helps us get more companies to post here!

When applying for jobs, you should NEVER have to pay to apply. That is a scam! Posts that link to pages with "how to work online" are also scams. Don't use them or pay for them. Also always verify you're actually talking to the company in the job post and not an imposter. Scams in remote work are rampant, be careful! When clicking on the button to apply above, you will leave Remote OK and go to the job application page for that company outside this site. Remote OK accepts no liability or responsibility as a consequence of any reliance upon information on there (external sites) or here.

MetricStory


Engineer #1 With Big Data Emphasis Funded Startup



javascript

node js

engineer

big data


4yr

MetricStory is revolutionizing web analytics. Currently, it is painful to set up web analytics, create reports, and finally get insights out of those reports. Our goal is to make it easy for companies to capture and analyze customer data without having to code. To do this, we are storing and analyzing the full user clickstream. We are a recently Techstars-funded company and we have expert domain knowledge in analytics. You'll be our first engineer and have real ownership, responsibility, and impact on the business. The perfect candidate is a senior/lead-level engineer with a few years' experience building product who loves architecting complex systems. This position requires solving hard problems and is focused on writing scalable code to capture and analyze big data.

We are looking for an engineer who has experience with, and is passionate about, storing large volumes of data and retrieving that data in seconds. The ideal candidate will have experience storing large amounts of event data in a NoSQL database like DynamoDB and exporting/cleaning it with Amazon EMR (HiveQL) into Redshift for fast access. This position requires working knowledge of the best database structures for speed, large data sets, data cleaning, and how to transfer NoSQL data to SQL. If you are up for a serious technical challenge to help build this company from the ground up, then contact us!

Our stack is Node.js, DynamoDB, MongoDB, D3.js, Angular, Redis, Amazon Redshift, and plain vanilla JavaScript.
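The export/cleaning step described above (taking raw NoSQL event rows and producing warehouse-ready ones) can be sketched in plain Python; the field names and dirty-row rules here are illustrative assumptions, not MetricStory's actual schema:

```python
from datetime import datetime, timezone

# Hypothetical raw clickstream records as they might land in a NoSQL event store.
raw_events = [
    {"user": "u1", "path": "/pricing", "ts": "2015-06-01T12:00:00Z"},
    {"user": "u1", "path": "/signup",  "ts": "2015-06-01T12:01:30Z"},
    {"user": "",   "path": "/pricing", "ts": "not-a-date"},  # dirty row
]

def clean(event):
    """Drop rows a warehouse load job couldn't ingest; normalize the rest."""
    if not event["user"]:
        return None
    try:
        ts = datetime.strptime(event["ts"], "%Y-%m-%dT%H:%M:%SZ")
    except ValueError:
        return None
    return {"user": event["user"], "path": event["path"],
            "ts": ts.replace(tzinfo=timezone.utc)}

warehouse_rows = [r for e in raw_events if (r := clean(e))]
```

At scale the same clean-and-normalize logic would run inside an EMR/Hive job before the COPY into Redshift, rather than in a single process.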

See more jobs at MetricStory


Showroom Logic


Big Data Engineer



engineer

big data


4yr

Showroom Logic is the 26th fastest-growing company in America. It powers paid-search, display, and retargeting campaigns for thousands of auto dealerships nationwide with its industry-leading AdLogic platform. Our dev team is an elite group of individuals who love creating solutions to complex technical problems. Our full-time devs enjoy benefits for them and their families, very competitive salaries, periodic trips to Miami or Southern California, the flexibility of telecommuting, an extremely high level of trust, and fun, skill-stretching projects. We are changing the way advertisers manage their digital marketing with our award-winning technology.

Position Summary:

We are looking for a Data Engineer to be responsible for retrieving, validating, analyzing, processing, cleansing, and managing external and internal data sources. This is not just a data warehousing position: a critical function of this job is to design and implement optimal ways to manage and analyze data. The Data Engineer is expected to learn existing processes, learn and apply "Big Data" tools, and apply software development skills to automating processes, creating tools, and modifying existing processes for increased efficiency and scalability.

Key functions include:

* Developing tools for data processing and information retrieval (both batch processing and real-time querying)
* Supporting existing projects where evaluating and ensuring data quality is vital to the product development process
* Analyzing, processing, evaluating, and documenting very large data sets
* Providing RESTful APIs that other teams can use to store and retrieve data
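The RESTful store/retrieve function listed above maps onto a simple keyed store. A dict-backed sketch of the REST verbs involved (class, key, and field names are hypothetical, not Showroom Logic's API):

```python
# A minimal in-memory store with REST-style verb semantics.
class DataStore:
    def __init__(self):
        self._rows = {}

    def put(self, key, record):      # PUT /data/<key>
        self._rows[key] = record
        return record

    def get(self, key):              # GET /data/<key>
        return self._rows.get(key)

    def delete(self, key):          # DELETE /data/<key>
        return self._rows.pop(key, None)

store = DataStore()
store.put("dealer-42", {"clicks": 1280, "spend": 431.5})
```

A production service would put an HTTP layer (and a real database) in front of the same operations; the verb-to-operation mapping is the point of the sketch.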

See more jobs at Showroom Logic


Tapway


Big Data Engineer Python Developer


Malaysia


python

engineer

big data

dev


5yr

Job Description:

- Lead, design, develop, and implement large-scale, real-time data processing systems, working with large structured and unstructured data from various complex sources.
- Design, implement, and deploy ETL to load data into NoSQL/Hadoop.
- Fine-tune the performance of the data processing platform.
- Develop various APIs to interact with the front end and other data warehouses.
- Coordinate with web programmers to deliver a stable and highly available reporting platform.
- Coordinate with data scientists to integrate complex data models into the data processing platform.
- Have fun in a highly dynamic team and drive innovations to continue as a leader in one of the fastest-growing industries.

Job Requirements:

- Candidates must possess at least a Bachelor's degree in Computer Science, Information Systems, or a related discipline. MSc or PhD a plus.
- Proficiency in Python
- A strong background in interactive query processing
- Experience with Big Data applications/solutions such as Hadoop, HBase, Hive, Cassandra, Pig, etc.
- Experience with NoSQL and handling large datasets
- Passion and interest in all things distributed: file systems, databases, and computational frameworks
- Passionate, resourceful, self-motivated, and highly committed; a team player able to motivate others
- Strong leadership qualities
- Good verbal and written communication
- Must be willing to work in a highly dynamic and challenging startup environment

#Salary
30000 - 45000

#Equity
1.0 - 3.0

#Location
- Malaysia
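The ETL responsibility above follows a classic extract/transform/load shape. A minimal Python sketch with hypothetical source rows (some structured JSON, some free text); the real sink would be NoSQL/Hadoop, stubbed here as a list:

```python
import json

# Hypothetical mixed source rows: some JSON (structured), some free text.
source = [
    '{"plate": "WXY 1234", "gate": "A"}',
    'sensor glitch: ignore',
    '{"plate": "ABC 9876", "gate": "B"}',
]

def extract(lines):
    # In production this would read from files, queues, or APIs.
    for line in lines:
        yield line

def transform(lines):
    # Keep only rows that parse as JSON; skip unstructured noise.
    for line in lines:
        try:
            yield json.loads(line)
        except json.JSONDecodeError:
            continue

def load(records, sink):
    # Stand-in for a NoSQL / Hadoop write.
    sink.extend(records)

sink = []
load(transform(extract(source)), sink)
```

Chaining generators keeps the pipeline streaming: no stage materializes the full dataset, which is what makes the same shape work at large scale.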

See more jobs at Tapway

Visit Tapway's website


Swimlane


Big Data Engineer



engineer

big data


5yr

Swimlane is looking for a NoSQL engineer with C# experience to join our team. Our product enables Federal and Fortune 100 companies to do business intelligence on big data and to implement workflow procedures around that data. We are looking for a software engineer to help build the next-generation security management application. This is a new product, not a legacy one; the technology stack is the latest and greatest, and you will learn and use groundbreaking technologies! You will have the ability to work from home, work on open-source projects, and have the opportunity to write articles on isolated components of your work.

See more jobs at Swimlane


ScalingData


Infrastructure Engineer Big Data



engineer

big data


5yr

ScalingData's build infrastructure engineering team builds and maintains our internal build, test, continuous integration, packaging, release, and software delivery systems and infrastructure. Engineers who are interested in DevOps, configuration management, build systems, and distributed systems will feel at home thinking about developer efficiency and productivity, simplifying multi-language builds, automated testing of complex distributed systems, and how customers want to consume and deploy complex distributed systems in modern data centers. The build infrastructure team is a critical part of the larger engineering team. Distributed systems are hard, but building the infrastructure to develop them is harder.

By building on big data technologies such as Hadoop, help us create the essential solution for identifying and solving critical performance and compliance issues in data centers.

Some of the technology we use:

* Java, Go, C/C++
* Hadoop, Solr, Kafka, Impala, Hive, Spark
* AWS, Maven, Jenkins, GitHub, JIRA

See more jobs at ScalingData


CLARITY SOLUTION GROUP


Big Data



scala

api

engineer

big data


5yr
Note: For this position you will be required to travel throughout the country on a weekly basis.

DOES WORKING FOR AN ORGANIZATION WITH THESE BENEFITS APPEAL TO YOU?

* Working on complex transformation programs across many clients and many industries
* Unlimited paid time off
* Competitive compensation, including uncapped bonus potential based on individual contributions
* Mentor program
* Career development
* Tremendous growth opportunities (the company is growing at a rate of 35% or more annually)
* Strong work/life balance
* A smaller, nimble organization that is easy to work with
* Visibility to the leadership team on a daily basis
* Being part of an elite Data & Analytics team

IF YOU ANSWERED YES TO THE ABOVE ITEMS, KEEP READING!

We are looking for individuals with the ability to drive the architectural decision-making process, who are experienced with leading teams of developers, but who are also capable of, and enthusiastic about, implementing every aspect of an architecture themselves.

OUR DATA ENGINEERS:

* Are hands-on, self-directed engineers who enjoy working in collaborative teams
* Are data transformation engineers who:
  * Design and develop highly scalable, end-to-end processes to consume, integrate, and analyze large-volume, complex data from sources such as Hive, Flume, and other APIs
  * Integrate datasets and flows using a variety of open-source and best-in-class proprietary software
* Work with business stakeholders and data SMEs to elicit requirements and develop real-time business metrics, analytical products, and analytical insights
* Profile and analyze complex and large datasets
* Collaborate with other technical team members and validate implementations
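The consume-and-integrate work described above typically means joining records from several sources on a shared key. A plain-Python sketch of that join (the source names, fields, and key are illustrative assumptions):

```python
# Hypothetical records from two ingestion sources, e.g. a Hive export
# and an API feed, to be integrated on a shared key.
hive_rows = [{"id": 1, "region": "NE"}, {"id": 2, "region": "SW"}]
api_rows  = [{"id": 1, "visits": 40}, {"id": 2, "visits": 12}]

def integrate(left, right, key="id"):
    """Inner-join two record lists on `key`, merging matched rows."""
    index = {row[key]: row for row in left}
    return [{**index[r[key]], **r} for r in right if r[key] in index]

joined = integrate(hive_rows, api_rows)
```

At production volume the same join would run in Hive or Spark rather than in memory, but the key-indexed merge is the underlying operation.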

See more jobs at CLARITY SOLUTION GROUP


The Shelf


Big Data Engineer


New York City


engineer

big data


5yr

We're looking for a data nerd with an engineering bent who enjoys incorporating messy and varied data sets into a clean and efficient data analysis pipeline. Wrangling data to find meaningful insights is what drives you to work every day. You are relentless in getting things done. You don't need parental supervision (i.e., you don't like to be micro-managed). You want to take ownership of the code/features you're building.

Must have:
- Deep understanding of CS fundamentals as well as distributed systems
- At least 5 years of experience building production-level software (Python and Django required)
- At least 2 years in a big-data-related role at a data-driven company
- Continuous integration and deployment experience

Experience: you should have experience fetching, processing, and analyzing data in Python:
- Experience developing and maintaining the back end of a data-driven web app
- Extensive experience with web scraping (deep knowledge of Selenium a plus)
- Experience implementing a data collection and analysis pipeline, scaling up to larger data sets and optimizing as necessary
- Experience working with relational and non-relational databases, particularly MongoDB
- (Not necessary, but we'd love you if) Experience with general data mining (NLTK) and machine learning techniques
- (Not necessary, but we'd love you if) Understanding of, and experience maintaining and optimizing, a PostgreSQL database is a major plus

Our stack:
Python + Django
Amazon EC2, RDS (Postgres), Rackspace, RabbitMQ for messaging, Celery for queues

#Salary
70000 - 120000

#Equity
0.5 - 3.0

#Location
- New York City
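The web-scraping experience asked for above starts with parsing fetched pages. A stdlib-only sketch of the parse stage (the page markup is a stand-in; a real pipeline would fetch pages via HTTP or Selenium before this step):

```python
from html.parser import HTMLParser

# A tiny stand-in for the parse stage of a scrape -> parse -> analyze pipeline.
class LinkExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        # Collect the href of every anchor tag encountered.
        if tag == "a":
            self.links.extend(v for k, v in attrs if k == "href")

page = '<html><body><a href="/post/1">x</a><a href="/post/2">y</a></body></html>'
parser = LinkExtractor()
parser.feed(page)
```

In a production pipeline, each extracted link would be handed off to a queue (e.g. Celery, as in the stack above) for the next fetch, keeping collection and analysis decoupled.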

See more jobs at The Shelf

Visit The Shelf's website
