
HG Insights


dev, senior

HG Insights is hiring a Remote Senior Software Engineer Big Data

**About Us**

Headquartered in beautiful Santa Barbara, HG Insights is the global leader in technology intelligence. HG Insights uses advanced data science methodologies to help the world's largest technology firms and the fastest-growing companies accelerate their sales, marketing, and strategy efforts.

We offer a competitive salary, growth potential, and a casual yet professional environment. Get your sweat on at one of our fitness classes or go for a run along the beach, which is two blocks away. You can find employees riding bikes to lunch in the Funk Zone or hanging out in one of our collaboration spaces. We are passionate about our jobs with a get-it-done attitude, yet we don't take ourselves too seriously. While we work very hard, we also enjoy all that the Santa Barbara coast has to offer.

**What You'll Do**

* Work on our Big Data Insights Platform, which processes billions of unstructured documents and a large data lake to extract and syndicate our intelligence for customer consumption.
* Solve hard data problems using cutting-edge technologies.

**What You'll Be Responsible For**

* Collaborate with Product Development Teams to build the most effective solutions
* Develop features in our databases, backend apps, front-end UI, and Data as a Service (DaaS) product
* Help architect and design large-scale enterprise big-data systems
* Work on ideas from different team members as well as your own
* Fix bugs rapidly
* Attend daily stand-up meetings and planning sessions, encourage others, and collaborate at a rapid pace

**What You'll Need**

* BS, MS, or Ph.D. in Computer Science or a related technical discipline
* 5+ years of designing and programming in a work setting
* Proficiency in Java or Scala (understand and have real-world experience with design patterns)
* Experience as a technical lead and mentor for other engineers
* Understanding of pragmatic agile development practices
* MySQL, Elasticsearch, Hadoop/Spark, or similar
* Experience with Amazon Web Services (EC2, S3, RDS, EMR, ELB, etc.)
* Experience with web services using REST
* Comfortable working with CI/CD and automation environments such as Docker, Kubernetes, Terraform, or similar
* Proven track record of successful project delivery

**Nice-to-haves**

* Coding experience in large distributed environments with multiple endpoints and complex interactions
* Basic DevOps skills (automate everything, infrastructure as code)
* Loves startup culture, where everyone's contributions are felt and loved
* Self-learner, hacker, technology advocate who can work on anything
* Amazing engineering skills; you're on your way to being one of the best engineers you know
* You can architect, design, code, test, and mentor others
* Experience working with interesting and successful projects
* Thrives in a fast-growing environment
* Excellent written and spoken English communication

Due to COVID-19, we've transitioned to a work-from-home model and we're continuing to interview and hire during this time. This role is expected to begin as a remote position. Normally, you'll be working in Santa Barbara two blocks from the beach!

HG Insights is an Equal Opportunity Employer.

Please note that HG Insights does not accept unsolicited resumes from recruiters or employment agencies. In the event of a recruiter or agency submitting a resume or candidate without a signed agreement in place, we explicitly reserve the right to pursue and hire such candidates without any financial obligation to the recruiter or agency. Any unsolicited resumes, including those submitted directly to hiring managers, are deemed to be the property of HG Insights.


See more jobs at HG Insights

This month's Remote Big Data + Engineer Jobs

Nira


dev

 

Nira is hiring a Remote Software Engineer Distributed Systems Big Data 100

What You'll Do

* Architecture design, API design, data modeling.
* Stream data processing and distributed systems.
* Code standards, code reviews, technical planning/research, testing/QA.
* Investigate and resolve bugs/customer issues.
* Assist in scoping, estimating, and planning of projects.

Who We're Looking For

* You've got 5 years of experience with: AWS, Docker, Kubernetes, Elasticsearch, Go, gRPC/Protobuf, Kafka, microservices, MongoDB, Python.
* You have high accountability and ownership of your work.
* You have a bias towards action. You love to move fast, are self-motivated, and are a lifelong learner.
* You care about working on fast-growing products while iterating and sweating the details.
* You're willing to do whatever it takes, even if this means working outside of your role (backend helping frontend, handling customer support, etc.).
* You're able to effectively balance speed, quality, and tech debt, and make engineering decisions that enable speed and quality results.
* You're a product thinker who cares about the customer.


See more jobs at Nira

Previous Remote Big Data + Engineer Jobs

Shopify


verified closed
Canada, United States

senior data engineer, data engineering, data platform engineering, spark

This job post is closed and the position is probably filled. Please do not apply.
**Company Description**

Shopify is the leading omni-channel commerce platform. Merchants use Shopify to design, set up, and manage their stores across multiple sales channels, including mobile, web, social media, marketplaces, brick-and-mortar locations, and pop-up shops. The platform also provides merchants with a powerful back office and a single view of their business, from payments to shipping. The Shopify platform was engineered for reliability and scale, making enterprise-level technology available to businesses of all sizes.

**Job Description**

Our Data Platform Engineering group builds and maintains the platform that delivers accessible data to power decision-making at Shopify for over a million merchants. We're hiring high-impact developers across teams:

* The Engine group organizes all merchant and Shopify data into our data lake in highly optimized formats for fast query processing, and maintains the security and quality of our datasets.
* The Analytics group builds products that leverage the Engine primitives to deliver simple and useful products that power scalable transformation of data at Shopify in batch, streaming, or machine learning workloads. This group is focused on making it really simple for our users to answer three questions: What happened in the past? What is happening now? And what will happen in the future?
* The Data Experiences group builds end-user experiences for experimentation, data discovery, and business intelligence reporting.
* The Reliability group operates the data platform efficiently in a consistent and reliable manner. They build tools for other Data Platform teams to leverage to encourage consistency, and they champion reliability across the platform.

**Qualifications**

While our teams value specialized skills, they've also got a lot in common. We're looking for a(n):

* High-energy self-starter with experience in and passion for data and big-data-scale processing. You enjoy working in fast-paced environments and love making an impact.
* Exceptional communicator with the ability to translate technical concepts into easy-to-understand language for our stakeholders.
* Excitement for working with a remote team; you value collaborating on problems, asking questions, delivering feedback, and supporting others in their goals whether they are in your vicinity or entire cities apart.
* Solid software engineer: experienced in building and maintaining systems at scale.

**A Senior Data Developer at Shopify typically has 4-6 years of experience in one or more of the following areas:**

* Working with the internals of a distributed compute engine (Spark, Presto, DBT, or Flink/Beam)
* Query optimization, resource allocation and management, and data lake performance (Presto, SQL)
* Cloud infrastructure (Google Cloud, Kubernetes, Terraform)
* Security products and methods (Apache Ranger, Apache Knox, OAuth, IAM, Kerberos)
* Deploying and scaling ML solutions using open-source frameworks (MLflow, TFX, H2O, etc.)
* Building full-stack applications (Ruby/Rails, React, TypeScript)
* Background and practical experience in statistics and/or computational mathematics (Bayesian and frequentist approaches, NumPy, PyMC3, etc.)
* Modern big-data storage technologies (Iceberg, Hudi, Delta)

**Additional information**

At Shopify, we are committed to building and fostering an environment where our employees feel included, valued, and heard. Our belief is that a strong commitment to diversity and inclusion enables us to truly make commerce better for everyone. We strongly encourage applications from Indigenous people, racialized people, people with disabilities, people from gender and sexually diverse communities, and/or people with intersectional identities.

#Location
Canada, United States


See more jobs at Shopify

# How do you apply?

This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.
Clevertech

This job post is closed and the position is probably filled. Please do not apply.
**Working at Clevertech**

People do their best work when they're cared for and in the right environment:

* RemoteNative™: Pioneers in the industry, we are committed to remote work.
* Flexibility: Wherever you are, and wherever you want to go. We embrace the freedom gained through trust and professionalism.
* Team: Be part of an amazing team of senior engineers that you can rely on.
* Growth: Become a master in the art of remote work and effective communication.
* Compensation: Best-in-class compensation for remote workers, plus the swag you want.
* Cutting Edge: Stay sharp in your space, work at the very edge of tech.
* Passion: Annual financial allowance for YOUR development and YOUR passions.

**The Job**

* 7+ years of professional experience (a technical assessment will be required)
* Senior-level experience with data warehousing and data engineering
* Experience evaluating and selecting a cloud data warehouse - Redshift, Snowflake, or Synapse
* You have accomplishments with data modeling, schemas, and ETL development to support business processes, ideally in the construction industry
* Clear communicator with expertise in management-level presentation and documentation
* English fluency, verbal and written
* Professional, empathic, team player
* Problem solver, proactive, go-getter

**Life at Clevertech**

We're Clevertech. Since 2000, we have been building technology through empowered individuals. As a team, we challenge in order to be of service, to deliver growth and drive business for our clients.

Our team is made up of people that are not only from different countries, but also from diverse backgrounds and disciplines. A coordinated team of individuals that care, take on responsibility, and drive change.

https://youtu.be/1OKhKatReyg

**Getting Hired**

Interested in exploring your future in this role and Clevertech? Set yourself up for success and take a look at our [Interview Process](https://www.clevertech.biz/thoughts/interviewing-with-clevertech) before getting started!

#Location
US, Canada, Europe


See more jobs at Clevertech

# How do you apply?

This job post has been closed by the poster, which means they probably have enough applicants now. Please do not apply.
Clevertech

This job post is closed and the position is probably filled. Please do not apply.
**Working at Clevertech**

People do their best work when they're cared for and in the right environment:

* RemoteNative™: Pioneers in the industry, we are committed to remote work.
* Flexibility: Wherever you are, and wherever you want to go. We embrace the freedom gained through trust and professionalism.
* Team: Be part of an amazing team of senior engineers that you can rely on.
* Growth: Become a master in the art of remote work and effective communication.
* Compensation: Best-in-class compensation for remote workers, plus the swag you want.
* Cutting Edge: Stay sharp in your space, work at the very edge of tech.
* Passion: Annual financial allowance for YOUR development and YOUR passions.

**The Job**

* 7+ years of professional experience (a technical assessment will be required)
* Senior-level experience developing and operating large-scale data pipelines with Apache Spark
* Experience with Spark, Beam, Druid, Ignite
* Experience with reporting, large datasets, and complex queries
* English fluency, verbal and written
* Professional, empathic, team player
* Problem solver, proactive, go-getter

**Life at Clevertech**

We're Clevertech. Since 2000, we have been building technology through empowered individuals. As a team, we challenge in order to be of service, to deliver growth and drive business for our clients.

Our team is made up of people that are not only from different countries, but also from diverse backgrounds and disciplines. A coordinated team of individuals that care, take on responsibility, and drive change.

https://youtu.be/1OKhKatReyg

**Getting Hired**

Interested in exploring your future in this role and Clevertech? Set yourself up for success and take a look at our [Interview Process](https://www.clevertech.biz/thoughts/interviewing-with-clevertech) before getting started!

#Location
US, Canada, Europe


See more jobs at Clevertech

# How do you apply?

This job post has been closed by the poster, which means they probably have enough applicants now. Please do not apply.

Andalus


closed
🌏 Worldwide

kubernetes, spark, infrastructure

This job post is closed and the position is probably filled. Please do not apply.
Andalus is a start-up aiming to utilize the latest technologies to help clients solve their data needs. We are hackers by nature and like to think of innovative solutions to issues that have long been considered status quo. We aim to empower our clients with state-of-the-art infrastructure components, allowing them to be data-enabled organizations.

You will be working first-hand on developing the first major release of our Andalus platform, targeted at empowering important organizations in the MENA region.

**Responsibilities:**

* Design, build, and maintain the core data computation infrastructure used by Andalus
* Debug issues across services and levels of the stack
* Propose the right hardware specs and components that would run well with the developed software and adhere to clients' requirements
* Develop systems that proactively capture the health status of various software and hardware components
* Build a great customer experience for people using your infrastructure

**Qualifications:**

* Familiar with the latest computation and storage architectures and components
* Experience managing distributed systems within computing clusters
* Experience in configuration and maintenance of applications such as web servers, load balancers, relational databases, storage systems, and messaging systems
* Knowledge of system design concepts
* Ability to debug complex problems across the whole stack
* Demonstrated understanding of container networking and security
* Comfort working with network protocols, proxies, and load balancers
* Experience building highly available services
* Experience with Kubernetes or other container orchestration systems
* Technical writing skills
* Interest in or experience with systems languages, such as Go
* Strong communication skills and willingness to engage with your teammates in group problem-solving in a remote-work environment

**Preferred Qualifications:**

* Knowledge of data governance tooling and principles
* Working experience in organizations with large databases
* Has dealt with data integrity and cleaning issues in the past
* 2+ years of experience as a tech lead

#Location
🌏 Worldwide


See more jobs at Andalus

# How do you apply?

This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.

Vendrive


verified closed
🇺🇸 US-only

go, golang, postgresql, data science

This job post is closed and the position is probably filled. Please do not apply.
# Description

Our mission is to standardize the B2B supply chain to allow for the more efficient operation of retailers, distributors, and manufacturers alike. We've started with the automation and streamlining of day-to-day operations carried out by online retailers. Our price management software, Aura, proudly supports over 1,000 Amazon merchants and processes over 750 million price changes on the Amazon marketplace each month.

Our profitable, bootstrapped company was founded in 2017 by a pair of Amazon sellers with a background in engineering. We're a team of die-hard nerds obsessed with big data and automation.

We're looking for a **Backend Software Engineer** with experience in distributed systems and an entrepreneurial mindset to join us.

Our growing team is remote-first, so it's important that you're able to communicate clearly and effectively in such an environment. We meet regularly, so we require that prospective team members' timezones are in alignment with ours (UTC-10 to UTC-4).

# Responsibilities

* Design and implement core backend microservices in Go
* Design efficient SQL queries
* Follow test-driven development practices
* Conduct design and code reviews
* Participate in daily standups and weekly all-hands meetings

# Our Stack

Our backend follows an event-driven microservice architecture. Here are some of the technologies you'll be using:

* Golang
* PostgreSQL
* Redis, Elasticsearch
* Several 3rd-party APIs

# Benefits

* Competitive salary
* Fully remote position
* Company-sponsored health, vision, and dental insurance
* Flexible vacation policy
* Equity in a profitable company
* Bi-annual company retreats in locations around the world
* Startup culture where you're encouraged to experiment

# Requirements

* B.S. in Computer Science or relevant field
* Strong problem-solving and communication skills
* Experience with relational databases (PostgreSQL) and ability to analyze and write efficient queries
* Experience with Go in production-grade environments
* Experience building REST APIs
* Working knowledge of Git

# Preferred Qualifications

* Experience building distributed systems, event-driven microservice architecture, CQRS pattern
* Previous remote work experience
* Experience integrating with Amazon MWS (Marketplace Web Service)
* Experience with Redshift, writing performant analytical queries
* Experience collaborating via Git
* Hands-on experience with highly concurrent, production-grade systems
* Understanding of DevOps, CI/CD
* Experience using AWS services (EC2, ECS, RDS, Redshift, SNS, SQS, etc.)
* Understanding of the key metrics which drive a startup SaaS business (MRR, LTV, CAC, Churn, etc.)

#Location
🇺🇸 US-only


See more jobs at Vendrive

# How do you apply?

This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.

X-Mode Social


verified closed
🇺🇸 US-only

scala, spark, dctech, aws

This job post is closed and the position is probably filled. Please do not apply.
X-Mode provides real-time location data and technologies that power location intelligence for advertising and business decisions in financial services, healthcare, high-tech, real estate, retail, and the public sector. X-Mode's flagship product is a fast-growing big data location platform, which maps the precise routes of 10% of the U.S. population daily and 1 in 3 adult U.S. smartphone users monthly. X-Mode strives to produce and monetize the world's largest location platform and ultimately create a global "living map" of 1 billion people with the highest-quality location data in order to fuel the best location intelligence business solutions.

X-Mode Social, Inc. is looking for a full-time Back-End Engineer to join our back-end team. For this position, you can work in either our Reston, VA headquarters or remotely. Our technical staff is scattered across the U.S., so you'll need to be comfortable working remotely. We often use video conferencing tools (like Slack, Google Meet) to coordinate, as well as Jira for tasking and Bitbucket for source control. We work in short sprints, and we'll count on you to provide estimates for tasks to be completed and delivered. We're looking to hire someone to start right away! Think you've got what it takes? Apply below!

# Responsibilities

* Use big data technologies, processing frameworks, and platforms to solve complex problems related to location
* Build, improve, and maintain data pipelines that ingest billions of data points on a daily basis
* Efficiently query data and provide data sets to help the Sales and Client Success teams with any data evaluation requests
* Ensure high data quality through analysis, testing, and usage of machine learning algorithms

# Requirements

* BS in Computer Science preferred, or relevant industry experience
* 1+ years of Spark and Scala experience
* Self-directed and self-motivated individuals comfortable working on a remote team
* Strong curiosity about new technologies and a desire to always use the best tools for the job
* Experience working with very large databases and batch processing datasets with hundreds of millions of records
* Experience with the Hadoop ecosystem, e.g. Spark, Hive, or Presto/Athena
* Real-time streaming with Kinesis, Kafka, or similar libraries
* Experience with SQL-based data architectures
* 2+ years of Linux experience
* 2 years working with cloud services, ideally in AWS
* Self-motivated learner who is willing to self-teach and can maintain a team-centered outlook
* BONUS: Experience with Python, machine learning, and Elasticsearch or Apache Solr
* BONUS: GIS/geospatial tools/analysis and any past experience with geolocation data

#Location
🇺🇸 US-only


See more jobs at X-Mode Social

# How do you apply?

This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.
SmileDirectClub

This job post is closed and the position is probably filled. Please do not apply.
SmileDirectClub is looking for an experienced Data Engineer to help design and scale our data pipelines so that our engineers, operations team, marketing managers, and analysts can make better decisions with data. We are looking for engineers who understand that simplicity and reliability are aspects of a system that can't be tacked on, but are carefully calculated with every decision made. If you have experience working on ETL pipelines and love thinking about how data models and schemas should be architected, we want to hear from you.

SmileDirectClub was founded on a simple belief: everyone deserves a smile they love. We are the first digital brand for your smile. The company was built upon a realization that recent trends in 3D printing and telehealth could bring about disruptive change to the invisible aligner market. By leveraging proprietary cutting-edge technology, we're helping customers avoid office visits and cutting their costs by up to 70 percent, because people shouldn't have to pay a small fortune for a better smile.

You will:

* Design and build new dimensional data models and schema designs to improve accessibility, efficiency, and quality of internal analytics data
* Build, monitor, and maintain analytics data ETL pipelines
* Implement systems for tracking data quality and consistency
* Work closely with Analytics, Marketing, Finance, and Operations teams to understand data and analysis requirements
* Work with teams to continue to evolve data models and data flows to enable analytics for decision making (e.g., improve instrumentation, optimize logging, etc.)

We're looking for someone who:

* Has a curiosity about how things work
* Is willing to roll up their sleeves to leverage big data and discover new key performance indicators
* Has built enterprise data pipelines and can craft clean and beautiful code in SQL, Python, and/or R
* Has built batch data pipelines with Hadoop or Spark as well as with relational database engines, and understands their respective strengths and weaknesses
* Has experience with ETL jobs, metrics, alerting, and/or logging
* Has expert knowledge of query optimization in MPP data warehouses (Redshift, Snowflake, Cloudera, HortonWorks, MapR, or similar)
* Has experience in cutting-edge design and development of big data solutions
* Is proficient in the latest trends in big data analytics and architecture
* Can jump into situations with few guardrails and make things better
* Possesses strong computer science fundamentals: data structures, algorithms, programming languages, distributed systems, and information retrieval
* Is a strong communicator; explaining complex technical concepts to product managers, support, and other engineers is no problem for you
* When things break, and they will, is eager and able to help fix them
* Is someone that others enjoy working with due to your technical competence and positive attitude
* Is ready to design and create ROLAP, MOLAP, and RDBMS data stores

How to stand out against the rest:

* Academic background in computer science or mathematics (BSc or MSc), or demonstrated industry hands-on experience
* Experience with agile development processes
* Experience building simple scripts and web applications using Python, Ruby, or PHP
* A solid grasp of basic statistics (regression, hypothesis testing)
* Experience in small start-up environments

Benefits:

* Competitive salary
* Health, vision, and dental insurance
* 401K plan
* PTO
* Discounted SmileDirectClub aligner treatment

About SmileDirectClub:

SmileDirectClub is backed by Camelot Venture Group, a private investment group that has been pioneering the direct-to-consumer industry since the early '90s, particularly in highly regulated industries. If you've heard of 1-800-CONTACTS, Quicken Loans, HearingPlanet, DiabetesCareClub or SongbirdHearing, then you've heard of Camelot. Their hands-on approach, extensive networking, and operational expertise ensure their portfolio companies reach their potential.

Having closed a $46.7 million capital raise in July 2016 led by Align Technology (NASDAQ: ALGN), owner of the Invisalign® brand, SmileDirectClub is now valued at $275 million and is continuing to grow share in the U.S. orthodontics market.


See more jobs at SmileDirectClub

Visit SmileDirectClub's website

# How do you apply?

This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.
Peak Games

This job post is closed and the position is probably filled. Please do not apply.
The Big Data Services engineering team is responsible for providing software tools, platforms, and APIs for collecting and processing large datasets, complete with search, analytics, and real-time pipeline processing capabilities to address the unique challenges of our industry. We are building large distributed systems that will be the heart of our data architecture, serving billions of requests, providing search and analytics across structured and semi-structured datasets, and scaling out to tens of terabytes while maintaining low latency, availability, and immediate discoverability by clients. We are reimagining the way we architect our data infrastructure across the company and are looking for an experienced software engineer to help. If solving intricate engineering issues with distributed systems, platform APIs, real-time big-data pipelines, and search and discovery query patterns is your calling, we would like to hear from you.

Major Responsibilities:

- Develop and maintain internal Big Data services and tools
- Leverage service-oriented architecture to create APIs, libraries, and frameworks that our studios will use
- Help build the real-time Data Platform to support our games
- Design and build data processing architecture in AWS
- Design, support, and build data pipelines
- Develop ETL in a distributed processing environment

What You Need for this Position:

- Bachelor's degree in a technical field (e.g., MIS, Computer Science, Engineering, or a related field of study)
- Full-stack experience is ideal, as you'll be delivering data and analytics solutions for business, analytics, and technology groups across the organization
- Minimum of 3 years of demonstrated experience with object-oriented programming (Java)
- Working knowledge of Python
- Experience in Go (Golang) is a huge plus
- Advanced skills in Linux shell and SQL are required
- Background with databases
- Experience in data modeling/integration and designing REST-based APIs for consumer-based services is a plus
- Good knowledge of open source technologies and the DevOps paradigm


See more jobs at Peak Games

Visit Peak Games's website

# How do you apply?

This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.
C8

This job post is closed and the position is probably filled. Please do not apply.
#ABOUT US

We're a London-based startup that is building an economy around people's data and attention. In short, we're creating a digital marketplace where consumers can dynamically license their personal data and attention to brands in return for a payment.

Our tech stack currently includes: Node (Heroku), ReactJS and AngularJS (Firebase), Express, Mongoose, SuperTest, MongoDB (MongoLab), npm (npmjs). Our distributed development team covers the development of the responsive web, mobile, and browser extension products.

We've recently completed the functional MVP and will be pushing on towards our closed-beta launch at the end of January.

#ABOUT YOU

We're looking for a freelance dev-ops person who has significant experience configuring, managing, and monitoring servers and backend services at scale to support our core development team.

#COME HELP US WITH PROJECTS LIKE...

- Review our platform architecture requirements and deploy a well documented, secure, and scalable cloud-based solution
- Tighten up security of our servers
- Set up autoscaling of our workers
- Make our deployments faster and safer
- Scale our MongoDB clusters to support our growing data sizes
- Improve API performance
- Automate more processes
- Make sure our backup and recovery procedures are well tested
- Implement a centralized logging system
- Instrument our application with more metrics and create dashboards
- Remove single points of failure in our architecture

#YOU SHOULD...

- Have real-world experience building scalable systems, working with large data sets, and troubleshooting various back-end challenges under pressure
- Have experience configuring monitoring, logging, and other tools to provide visibility and actionable alerts
- Understand the full web stack, networking, and low-level Unix computing
- Always be thinking of ways to improve reliability, performance, and scalability of an infrastructure
- Be self-motivated and comfortable with responsibility

#WHY WORK WITH US?

Work remotely from anywhere in the world, or from our HQ in London, UK. Just be willing to do a bit of traveling every quarter for some face-to-face time with the whole team. Be involved in an early-stage, fast-growth startup that has already received national press coverage.

Extra tags: Devops, AppSec, NodeJS, Cloud, Mongodb, API, Sys Admin, Engineer, Backend, Freelance, Consultant, security, big data, startup


See more jobs at C8

Visit C8's website

# How do you apply?

This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.

very large


closed

python, java, golang

This job post is closed and the position is probably filled. Please do not apply.
NEEDED FOR THIS ROLE

* Java, Python, Golang (intermediate to expert in one or more; expert preferred)
* NoSQL DB (Cassandra, etc.) or time-series/unstructured DB experience
* Big data and data at very large scale
* Experienced, battle-hardened software engineer (large distributed systems, large scale)

This is NOT an SRE role!

This is a software engineering role on a team that provides ALL monitoring and will be responsible for developing a custom stack for data integration and retrieval. The team monitors time-series data ingest of upwards of 1.5M+ records a minute.

MUST HAVE

* Ability to develop code to access resident data and then digest and correlate it
* Experienced, battle-hardened software engineer with distributed systems experience, deploying and implementing at large scale
* Solid programmer: knows one or more of Java, Python, Golang, and is an expert at one or more

They are not looking for a script writer. The ideal candidate has experience with a time-series data store (e.g. Cassandra) and expertise in NoSQL DBs at giga scale.

The SRE Monitoring Infrastructure team (note: this is NOT an SRE role) is looking for a backend software engineer with experience working with large-scale systems and an operational mindset to help scale our operational metrics platform. This is a fantastic opportunity to enable all engineers to monitor and keep our site up and running. In return, you will get to work with a world-class team supporting a platform that serves billions of metrics at millions of QPS.

The engineers fill the mission-critical role of ensuring that our complex, web-scale systems are healthy, monitored, automated, and designed to scale. You will use your background as an operations generalist to work closely with our development teams from the early stages of design all the way through identifying and resolving production issues. The ideal candidate will be passionate about an operations role that involves deep knowledge of both the application and the product, and will also believe that automation is a key component to operating large-scale systems.

Responsibilities:

* Serve as a primary point responsible for the overall health, performance, and capacity of one or more of our Internet-facing services
* Gain deep knowledge of our complex applications
* Assist in the roll-out and deployment of new product features and installations to facilitate our rapid iteration and constant growth
* Develop tools to improve our ability to rapidly deploy and effectively monitor custom applications in a large-scale UNIX environment
* Work closely with development teams to ensure that platforms are designed with "operability" in mind
* Function well in a fast-paced, rapidly changing environment
* Participate in a 24x7 rotation for second-tier escalations

Basic Qualifications:

* B.S. or higher in Computer Science or another technical discipline, or related practical experience
* UNIX/Linux systems administration background
* Programming skills (Golang, Python)

Preferred Qualifications:

* 5+ years in a UNIX-based, large-scale web operations role
* Golang and/or Python experience
* Previous experience working with geographically distributed coworkers
* Strong interpersonal communication skills (including listening, speaking, and writing) and ability to work well in a diverse, team-focused environment with other SREs, engineers, product managers, etc.
* Basic knowledge of most of these: data structures, relational and non-relational databases, networking, Linux internals, filesystems, web architecture, and related topics

Team

* Interact with 4-5 people (stand-ups), but not true scrum
* No interaction with outside teams

Candidate workflow

* 2 rounds
* 1 technical coding
* 1 team fit


See more jobs at very large

# How do you apply?

This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.

Empirico


closed
This job post is closed and the position is probably filled. Please do not apply.
Empirico, an early-stage biotechnology company, is looking for a talented software engineer who is motivated by the opportunity to build scalable data systems that power the discovery of new medicines. You will work closely with a team of engineers and computational scientists to build and extend Empirico's data infrastructure, which includes modern cloud-based systems and services that operate on some of the largest biological datasets in the world.

Responsibilities:

Your responsibilities will focus on designing and implementing robust and extensible data systems. You will be expected to:

* Design and implement scalable data infrastructure and pipelines
* Implement scalable algorithms in a distributed systems setting
* Collaborate closely with an interdisciplinary team of scientists and engineers to address system pain points
* Improve developer efficiency and system quality through an emphasis on elegant code
* Advocate for systems and engineering practice improvements

Requirements:

* 2+ years of professional experience designing and developing software on modern distributed data systems
* Experience processing and analyzing large and heterogeneous datasets
* Strong technical skill set that spans a broad range of technologies, programming languages, and paradigms
* Passion for systems thinking and a drive towards elegant and automated solutions to data problems
* Experience with Spark and Scala or another functional programming language is a plus
* Applicants must have authorization to work in the United States


See more jobs at Empirico

# How do you apply?

This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.

Kubevisor


closed

senior

 
This job post is closed and the position is probably filled. Please do not apply.
Description

A successful candidate will be a person who enjoys diving deep into data, doing analysis, discovering root causes, and designing long-term solutions. It will be a person who wants to innovate in the world of cloud big data engineering and machine learning. Major responsibilities include:

* Understand the client's business need and guide them to a solution using Apache Spark, Hadoop, Kubernetes, AWS, etc.
* Lead customer projects by delivering a Spark project on AWS from beginning to end, including understanding the business need, aggregating data, exploring data, and deploying to AWS (EMR, S3, Step Functions, etc.) to deliver business impact to the organization.

Basic Qualifications

* Degree in computer science or a similar field
* 5+ years of work experience in big data engineering
* Experience in managing and processing data in a data lake
* Able to use a major programming language, preferably Scala or Python (preference in that order), on Spark to process data at a massive scale
* Experience working with a wide range of big data tools, especially Spark on AWS

Preferred Qualifications

* Experience with processing large datasets with Spark on AWS
* Experience in data modelling, ETL development, and data warehousing
* Experience building/operating highly available, distributed systems for data extraction, ingestion, and processing of large data sets
* Experience developing software projects
* Experience using Linux to process large data sets
* Combination of deep technical skills and enough business savvy to interface with all levels and disciplines within our client's organization
* Demonstrable track record of dealing well with ambiguity, prioritizing needs, and delivering results in a dynamic environment

Ideally you are in the GMT to GMT+4 timezone.


See more jobs at Kubevisor

# How do you apply?

This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.

BairesDev


closed
This job post is closed and the position is probably filled. Please do not apply.
At BairesDev (Glassdoor Employee Score: 4.3), we are proud of being one of the fastest-growing companies in the industry because we don't sacrifice quality. With more than 1,300 collaborators, and providing talent to companies such as Google, Pinterest, and Udemy, we continue to rapidly add talent to our multicultural team, who will help us get to the next level.

Big Data Engineers will face numerous business-impacting challenges, so they must be ready to use state-of-the-art technologies and be familiar with different IT domains such as Machine Learning, Data Analysis, Mobile, Web, IoT, etc. They are passionate, active members of our community who enjoy sharing knowledge, challenging, and being challenged by others, and are truly committed to improving themselves and those around them.

Main Activities:

- Work alongside Developers, Tech Leads, and Architects to build solutions that transform users' experience.
- Impact the core of the business by improving existing architecture or creating new ones.
- Create scalable and high-availability solutions, and contribute to the key differential of each client.

What Are We Looking For:

- 6+ years of experience working as a Developer (Ruby, Python, Java, JS preferred).
- 5+ years of experience in Big Data (comfortable with enterprise Big Data topics such as Governance, Metadata Management, Data Lineage, Impact Analysis, and Policy Enforcement).
- Proficient in analysis, troubleshooting, and problem-solving.
- Experience building data pipelines to handle large volumes of data (either leveraging well-known tools or custom-made ones).
- Advanced English is mandatory.

Proficiency in the following topics is highly appreciated:

- Building Data Lakes with Lambda/Kappa/Delta architecture.
- DataOps, particularly creating and managing processes for batch and real-time data ingestion and processing.
- Hands-on experience with managing data loads and data quality.
- Modernizing enterprise data warehouses and business intelligence environments with open source tools.
- Deploying Big Data solutions to the cloud (Cloudera, AWS, GCP, or Azure).
- Performing real-time data visualization and time series analysis using both open source and commercial solutions.

We offer:

- 100% remote / work-from-home flexible schedules.
- Excellent compensation.
- Multiple opportunities to learn and grow in a people-first environment.
- Warm company culture.
- Clients interested in what you have to say, eager to hear your opinions, and mostly in working together towards building something great.

Apply now and become part of this fantastic startup. At BairesDev, remote work is at our core. Enjoy the opportunity to have a dynamic lifestyle, better health, and wellness. Find renewed passion in your job, improve your productivity, and benefit from attractive growth opportunities for your career.


See more jobs at BairesDev

# How do you apply?

This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.

VividCortex Database Performance Monitoring


closed

senior

 
This job post is closed and the position is probably filled. Please do not apply.
*Only candidates residing inside of the United States will be considered for this role*

About VividCortex

Are you excited by designing and developing a high-volume, highly available, and highly scalable SaaS product in AWS that supports Fortune 500 companies? Do you love the energy and excitement of being part of a growing and successful startup company? Are you passionate about diving deep into technologies such as microservices/APIs, database design, and data architecture?

VividCortex, founded in 2012, is building a world-class company with a mixed-discipline team to provide incredible value for our customers. Hundreds of industry leaders like GitHub, SendGrid, Etsy, Yelp, Shopify, and DraftKings rely on VividCortex. Our company's growth continues to accelerate (#673 Inc. 5000) for yet another year, so we need your help.

We are extremely customer-focused, engaged in building an authentic, low-drama team that is open, candid, sincerely practicing "disagree and commit", constantly learning and improving, and with a focused, get-it-done attitude about our commitments.

A successful candidate thrives in a highly collaborative and fast-paced environment. We expect and encourage innovation, responsibility, and accountability from our team members, and expect you to make substantial contributions to the architectural and technical direction of both the product and company.

About the Role

VividCortex needs an experienced and senior hands-on data and software engineer who has "been there and done that" to help take our company to the next level. We are designing and building our next-generation system for continuous high-volume data storage, analysis, and presentation. You are hands-on and working at the intersection of data, engineering, and product. You are key in defining the strategy and tactics of how we store and process massive amounts of performance metrics and other data we capture from our customers' database servers.

Our platform is written in Go and hosted entirely on the AWS cloud. It currently uses Kafka, Redis, and MySQL technologies among others. We are a DevOps organization building a 12-factor microservices application; we practice small, fast cycles of rapid improvement and full exposure to the entire infrastructure, but we don't take anything to extremes.

The position offers excellent benefits, a competitive base salary, and the opportunity for equity. Diversity is important to us, and we welcome and encourage applicants from all walks of life and all backgrounds.

What You Will Be Doing

* Discover, define, design, document, and assist in developing scalable backend storage and robust data pipelines for different types of data streams, both structured and unstructured, in an AWS environment based on Linux and Golang
* Work with others to define, and propose for approval, a modern data platform design strategy and matching architecture and technology choices to support it, with the goal of providing a highly scalable, economical, observable, and operable data platform for storing and processing very large amounts of data within tight performance tolerances
* Perform high-level strategy and hands-on infrastructure development for the VividCortex data platform, developing and deploying new data management services in AWS
* Collaborate with engineering management to drive data systems design, deployment strategies, scalability, infrastructure efficiency, monitoring, and security
* Write code, tests, and deployment manifests and artifacts
* Work with CircleCI and GitHub in a Linux environment
* Issue pull requests, create issues, and participate in code reviews and approval
* Continually seek to understand, measure, and improve performance, reliability, resilience, scalability, and automation of the system. Our goal is that systems should scale linearly with customer growth, and the effort of maintaining the systems should scale sub-linearly
* Support product management in prioritizing and coordinating work on changes and serve as a lead in creating user-focused technical requirements and analysis
* Assist with customer support, sales, and other activities as needed
* Understand and enact our security posture and practices
* Rotate through on-call duty
* Contribute to a culture of continuous learning and clear responsibility and accountability
* Manage your workload, collaborating and working independently as needed, keeping management appropriately informed of progress and issues

Basic Qualifications:

* Experience developing and extending a SaaS multi-tenant application
* Domain expert in scalable, highly available data storage, scaling, organization, formats, security, reliability, etc.
* Capable of deep technical understanding and discussion of databases, software and service design, systems, and storage
* 10+ years of experience in distributed software systems design and development
* 7+ years of experience programming in Golang, Java, C#, or C
* 7+ years of experience designing and hands-on implementation and maintenance of data pipelines at big data scale, employing a wide variety of big data technologies, as well as cleaning and organizing data to be reliable and usable
* Experience designing highly complex data infrastructures and maintenance of same
* Mastery of relational database concepts, including a strong knowledge of SQL and of technologies such as MySQL and Postgres
* Experience with CI/CD, Git, and development in a Unix/Linux environment using the command line
* Excellent written and verbal communication skills
* Ability to understand and translate customer needs into leading-edge technology
* Collaborative, with a passion for highly effective teams and development processes

Preferred Qualifications:

* Master's degree in Computer Science or equivalent work experience
* Experience designing and deploying solutions with NoSQL technologies such as Mongo and DynamoDB
* 3+ years of experience with AWS infrastructure development, including experience with a variety of different ingestion technologies, processing frameworks, and storage engines, and an understanding of the tradeoffs between them
* Experience with Linux systems administration and enterprise security


See more jobs at VividCortex Database Performance Monitoring

# How do you apply?

This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.

Infiot


closed
This job post is closed and the position is probably filled. Please do not apply.
We are looking for a Software Engineer to work with us primarily on the data ingestion pipelines and analytics databases of our cloud platform. The qualified candidate will join a team of full-stack engineers who work on the front-end, back-end, and devops initiatives needed to accomplish the Infiot vision.

The ideal candidate would have most of the following qualifications. We will consider candidates who have some of these qualifications but are interested in working on this skillset.

* Experience with scalable, cloud-native, multi-tenant architectures
* Fluency in Java
* Experience with HTTP-based APIs (REST or GraphQL)
* Experience with big data frameworks (Apache Beam, Spark, etc.)
* Experience with SQL and NoSQL databases with a focus on scale
* Some Linux command line, Python, and make
* Ability to work in a team setting
* Passion for automation
* Passion for personal productivity improvement
* Passion for quality and customer satisfaction
* Passion for development-driven testing
* MS/PhD in Computer Science or equivalent knowledge/experience


See more jobs at Infiot

# How do you apply?

This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.

Source Coders


closed

dev, senior
This job post is closed and the position is probably filled. Please do not apply.
About you:

* Care deeply about democratizing access to data.
* Passionate about big data and excited by seemingly impossible challenges.
* At least 80% of people who have worked with you put you in the top 10% of the people they have worked with.
* You think life is too short to work with B-players.
* You are entrepreneurial and want to work in a super fast-paced environment where the solutions aren't already predefined.
* You live in the U.S. or Canada and are comfortable working remotely.

About SafeGraph:

* SafeGraph is a B2B data company that sells to data scientists and machine learning engineers.
* SafeGraph's goal is to be the place for all information about physical Places.
* SafeGraph currently has 20+ people and has raised a $20 million Series A. The CEO was previously founder and CEO of LiveRamp (NYSE: RAMP).
* The company is growing fast, over $10M ARR, and is currently profitable.
* The company is based in San Francisco, but about 50% of the team is remote (all in the U.S.). We get the entire company together in the same place every month.

About the role:

* Core software engineer.
* Reporting to SafeGraph's CTO.
* Work as an individual contributor.
* Opportunities for future leadership.

Requirements:

* You have at least 6 years of relevant work experience.
* Proficiency writing production-quality code, preferably in Scala, Java, or Python.
* Strong familiarity with map/reduce programming models.
* Deep understanding of all things "database" - schema design, optimization, scalability, etc.
* You are authorized to work in the U.S.
* Excellent communication skills.
* You are amazingly entrepreneurial.
* You want to help build a massive company.

Nice to haves:

* Experience using Apache Spark to solve production-scale problems.
* Experience with AWS.
* Experience with building ML models from the ground up.
* Experience working with huge data sets.
* Python, database and systems design, Scala, data science, Apache Spark, Hadoop MapReduce.


See more jobs at Source Coders

# How do you apply?

This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.

Pixalate


closed
This job post is closed and the position is probably filled. Please do not apply.
\nWho are we?\n\n\nPixalate helps Digital Advertising ecosystem become a safer and more trustworthy place to transact in, by providing intelligence on "bad actors" using our world class data. Our products provide benchmarks, analytics, research and threat intelligence solutions to the global media industry. We make this happen by processing terabytes of data and trillions of data points a day across desktop, mobile, tablets, connected-tv that are generated using Machine Learning and Artificial Intelligence based models.\n\n\nWe are the World's #1 decision making platform for Digital Advertising. And don't just take our word for it -- Forrester Research consistently depends on our monthly indexes to make industry predictions.\n\n\n\n\nWhat does the media have to say about us?\n\n\n\n*  Harvard Business Review\n\n* Forbes\n\n* NBC News \n\n* CNBC\n\n* Business Insider\n\n* AdAge\n\n* AdAge\n\n* CSO Online\n\n* Mediapost\n\n* Mediapost\n\n* The Drum\n\n* Mediapost\n\n* Mediapost\n\n\n\n\n\nHow is it working at Pixalate?\n\n\nWe believe in Small teams that produce high output\n\n\nSlack is a way of life, short emails are encouraged\n\n\nFearless attitude holds high esteem\n\n\nBold ideas are worshipped\n\n\nChess players do really well\n\n\nTitles don't mean much, you attain respect by producing results\n\n\nEveryone's a data addict and an analytical thinker (you won't survive if you run away from details)\n\n\nCollaboration, collaboration, collaboration\n\n\nWhat will you do?\n\n\nSupport existing processes running in production\n\n\nDesign, develop, and support of various big data solutions at scale (hundreds of Billions of transactions a day)\n\n\nFind smart, fault tolerant, self-healing, cost efficient solutions to extremely hard data problems\n\n\nTake ownership of the various big data solutions, troubleshoot issues, and provide production support\n\n\nConduct research on new technologies that can improve current processes\n\n\nContribute to publications of case studies and white papers delivering cutting edge research in the ad fraud, security and measurement space\n\n\nWhat are the minimum requirements for this role?\n\n\nBachelors, Masters or Phd in Computer Science, Computer Engineering, Software Engineering, or other related technical field.\n\n\nA minimum of 3 years of experience in a software or data engineering role\n\n\nExcellent teamwork and communication skills\n\n\nExtremely analytical, critical thinking, and problem solving abilities\n\n\nProficiency in Java\n\n\nVery strong knowledge of SQL and ability to implement advanced queries to extract information from very large datasets\n\n\nExperience in working with very large datasets using big data technologies such as Spark, BigQuery, Hive, Hadoop, Redshift, etc\n\n\nAbility to design, develop and deploy end-to-end data pipelines that meet business requirements.\n\n\nStrong experience in AWS and Google Cloud platforms is a big plus\n\n\nDeep understanding of computer science concepts such as data structures, algorithms, and algorithmic complexity\n\n\nDeep understanding of statistics and machine learning algorithms foundations is a huge plus\n\n\nExperience with Machine Learning big data technologies such as R, Spark ML, H2O, Mahout etc is a plus\n\n\nWhat do we have to offer?\n\n\nLocated in sunny Palo Alto and Playa Vista, CA the core of Pixalate's DNA lies in innovation. We focus on doing things differently and we challenge each other to be the best we can be. 
We offer:\n\n\nExperienced leadership and founding team\n\n\nCasual environment (as long as you wear clothes, we're good!)\n\n\nFlexible hours (yes, we mean it - you will never have to sit in traffic anymore!)\n\n\nFREE Lunches! (You name it, we've got it)\n\n\nFun team events\n\n\nHigh performing team who wants to win and have fun doing it\n\n\nExtremely Competitive Compensation\n\n\nOPPORTUNITY (Pixalate will be what you make it)
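As a rough illustration of the SQL-at-scale work described in the post above, here is a minimal PySpark sketch that flags IPs with implausibly high click-through rates. The dataset path, column names (`ip`, `is_click`) and thresholds are hypothetical assumptions for illustration only, not Pixalate's actual schema or detection logic.

```python
# Illustrative sketch only -- paths, columns and thresholds are made up.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("ivt-aggregation-sketch").getOrCreate()

# Hypothetical partitioned Parquet dataset of ad transactions.
events = spark.read.parquet("s3://example-bucket/ad-events/date=2020-01-01/")

# Aggregate per IP and flag IPs whose click-through rate looks implausibly high.
suspicious = (
    events.groupBy("ip")
          .agg(F.count("*").alias("impressions"),
               F.sum(F.col("is_click").cast("long")).alias("clicks"))
          .withColumn("ctr", F.col("clicks") / F.col("impressions"))
          .filter((F.col("impressions") > 1000) & (F.col("ctr") > 0.5))
)

suspicious.write.mode("overwrite").parquet("s3://example-bucket/reports/suspicious-ips/")
```

At the scale the post describes, the same query shape would typically run against partitioned storage (or BigQuery/Hive) rather than a single day's files.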


See more jobs at Pixalate

# How do you apply?\n\n This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.

IQVIA The Human Data Science Company


closed

scala

 

senior

 
This job post is closed and the position is probably filled. Please do not apply.
\nWe are looking for creative, intellectually curious and entrepreneurial Big Data Software Engineers to join our London-based team.\n\nThe team\n\nJoin a high-profile team to work on ground-breaking problems in health outcomes across disease areas including Ophthalmology, Oncology, Neurology, chronic diseases such as diabetes, and a variety of very rare conditions. Work hand-in-hand with statisticians, epidemiologists and disease area experts across the wider global RWE Solutions team, leveraging a vast variety of anonymous patient-level information from sources such as electronic health records. The data encompasses IQVIA’s access to over 530 million anonymised patients as well as bespoke, custom partnerships with healthcare providers and payers. \n\nThe role\n\nAs part of a highly talented Engineering and Data Science team, write highly performant and scalable code that will run on top of our Big Data platform (Spark/Hive/Impala/Hadoop). Collaborate with Data Science & Machine Learning experts on the ETL process, including the cohort building efforts. \n\nWhat to expect:\n\n\n* Working in a cross-functional team – alongside talented Engineers and Data Scientists\n\n* Building scalable, highly performant code\n\n* Mentoring less experienced colleagues within the team\n\n* Implementing ETL and Feature Extraction pipelines\n\n* Monitoring cluster (Spark/Hadoop) performance\n\n* Working in an Agile Environment\n\n* Refactoring and moving our current libraries and scripts to Scala/Java\n\n* Enforcing coding standards and best practices\n\n* Working in a geographically dispersed team\n\n* Working in an environment with a significant number of unknowns – both technically and functionally.\n\n\n\n\nOur ideal candidate: Essential experience \n\n\n* BSc or MSc in Computer Science or related field\n\n* Strong analytical and problem-solving skills, with a personal interest in subjects such as math/statistics, machine learning and AI.\n\n* Solid knowledge of data structures and algorithms\n\n* Proficient in Scala, Java and SQL\n\n* Strong experience with Apache Spark, Hive/Impala and HDFS\n\n* Comfortable in an Agile environment using Test Driven Development (TDD) and Continuous Integration (CI)\n\n* Experience refactoring code with scale and production in mind\n\n* Familiar with Python, Unix/Linux, Git, Jenkins, JUnit and ScalaTest\n\n* Experience with integration of data from multiple data sources\n\n* NoSQL databases, such as HBase, Cassandra, MongoDB\n\n* Experience with any of the following distributions of Hadoop - Cloudera/MapR/Hortonworks.\n\n\n\n\nBonus points for experience in: \n\n\n* Other functional languages such as Haskell and Clojure\n\n* Big Data ML toolkits such as Mahout, SparkML and H2O\n\n* Apache Kafka, Apache Ignite and Druid\n\n* Container technologies such as Docker\n\n* Cloud Platform technologies such as DCOS/Marathon/Apache Mesos, Kubernetes and Apache Brooklyn.\n\n\n\n\nThis is an exciting opportunity to be part of one of the world's leading Real World Evidence-based teams, working to help our clients answer specific questions globally, make more informed decisions and deliver results.\n\nOur team within the Real-World & Analytics Solutions (RWAS) Technology division is a fast-growing group of collaborative, enthusiastic, and entrepreneurial individuals. In our never-ending quest for opportunities to harness the value of Real World Evidence (RWE), we are at the centre of IQVIA’s advances in areas such as machine learning and cutting-edge statistical approaches.
Our efforts improve retrospective clinical studies, under-diagnosis of rare diseases, personalized treatment response profiles, disease progression predictions, and clinical decision-support tools.\n\nWe invite you to join IQVIA™.\n\nIQVIA is a strong advocate of diversity and inclusion in the workplace.  We believe that a work environment that embraces diversity will give us a competitive advantage in the global marketplace and enhance our success.  We believe that an inclusive and respectful workplace culture fosters a sense of belonging among our employees, builds a stronger team, and allows individual employees the opportunity to maximize their personal potential.
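To make the ETL and cohort-building work described above concrete, here is a minimal, hedged PySpark sketch that extracts a diagnosis-based cohort from Hive tables. The table names (`patients`, `diagnoses`), column names and the ICD-10 prefix are assumptions for illustration only, not IQVIA's actual data model; the post asks for Scala/Java, and Python is used here purely to keep the sketch short.

```python
# A minimal cohort-building sketch; all table and column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("cohort-sketch").enableHiveSupport().getOrCreate()

patients = spark.table("patients")      # hypothetical Hive table: patient_id, birth_year, ...
diagnoses = spark.table("diagnoses")    # hypothetical Hive table: patient_id, icd10_code, event_date

# Select anonymised patients with at least one type 2 diabetes diagnosis (ICD-10 E11*),
# keeping the first diagnosis date as the cohort index date.
cohort = (
    diagnoses.filter(F.col("icd10_code").startswith("E11"))
             .groupBy("patient_id")
             .agg(F.min("event_date").alias("index_date"))
             .join(patients, "patient_id")
)

cohort.write.mode("overwrite").saveAsTable("analytics.t2dm_cohort_sketch")
```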


See more jobs at IQVIA The Human Data Science Company

# How do you apply?\n\n This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.

Ultra Tendency


closed

dev

 
This job post is closed and the position is probably filled. Please do not apply.
\nYour Responsibilities:\n\n\n\n\n* Deliver value to our clients in all phases of the project life cycle\n\n* Convert specifications into detailed instructions and logical steps, followed by their implementation\n\n* Build program code, test and deploy to various environments (Cloudera, Hortonworks, etc.)\n\n* Enjoy being challenged and solve complex data problems on a daily basis\n\n* Be part of our newly formed team in Berlin and help drive its culture and work attitude\n\n\n\n\n\n\nJob Requirements\n\n\n\n\n* Strong experience developing software using Java or comparable languages (e.g., Scala)\n\n* Practical experience with data ingestion, analysis, integration, and design of Big Data applications using Apache open-source technologies\n\n* Strong background in developing on Linux\n\n* Proficiency with the Hadoop ecosystem and its tools\n\n* Solid computer science fundamentals (algorithms, data structures and programming skills in distributed systems)\n\n* Sound knowledge of SQL, relational concepts and RDBMS systems is a plus\n\n* Computer Science (or equivalent) degree preferred, or comparable years of experience\n\n* Ability to work in an English-speaking, international environment \n\n\n\n\n\n\nWe offer:\n\n\n\n\n* Fascinating tasks and interesting Big Data projects in various industries\n\n* Benefit from 10 years of delivering excellence to our customers\n\n* Work with our open-source Big Data gurus, such as our Apache HBase committer and Release Manager\n\n* Work in the open-source community and become a contributor\n\n* Learn from open-source enthusiasts whom you will find nowhere else in Germany!\n\n* Fair pay and bonuses\n\n* Enjoy our additional benefits such as a free BVG ticket and fresh fruit in the office\n\n* Possibility to work remotely or in one of our development labs throughout Europe\n\n* Work with cutting-edge equipment and tools\n\n\n\n\n


See more jobs at Ultra Tendency

# How do you apply?\n\n This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.

Anchormen


closed

senior

 
This job post is closed and the position is probably filled. Please do not apply.
\nOverview\n\n\nAnchormen is growing rapidly! Therefore, we are looking for additional experienced Big Data Engineers to serve our customer base at the desired level. This entails giving advice, building and maintaining Big Data platforms, and employing data science solutions/models in enterprise environments.\n\nWe build and deliver data-driven solutions that do not depend on one specific tool or technology. As an independent consultant and engineer, your knowledge and experience will be a major contribution to our colleagues and customers. A diverse and challenging position, where technology is paramount. Will you join our team?\n\n\nResponsibilities\n\n\n\n* You will be working on 1 to 3 different projects at any given time.\n\n* On average you will work 50% of the time at the Anchormen office and 50% at the client's location.\n\n* You work closely with the business to achieve data excellence.\n\n* You have a pro-active attitude towards the needs of the client.\n\n* You will be building test-driven software.\n\n* You gather data from external APIs and internal sources and add value to the data platform.\n\n* You work closely with data scientists to bring machine learning algorithms into a production environment.\n\n\n\n\n\nYour profile\n\n\n\n* You work and think at a Bachelor’s or Master's level.\n\n* You have a minimum of two years of experience in a similar position.\n\n* You have knowledge about OO and functional programming in languages such as Java, Scala and Python (knowledge of several languages is a plus).\n\n* You have knowledge and experience with building and implementing APIs on a large scale.\n\n* You have thorough knowledge of SQL.\n\n* You believe in the principle of "clean coding"; you don’t just write code for yourself or a computer, but for your colleagues as well.\n\n* You have hands-on experience with technologies such as Hadoop, Spark, Kafka, Cassandra, HBase, Hive, Elastic, etc.\n\n* You are familiar with the Agile Principles.\n\n* You are driven to keep developing yourself and to follow the latest technologies.\n\n\n\n\n\nAbout Anchormen\n\n\nWe help our clients to use Big Data in a smart way, which leads to new insights, knowledge and efficiency. We advise our clients on designing their Big Data platform. Our consultants provide advice, implement the appropriate products, and create complex algorithms to do the proper analyses and predictions.\n\n\nWhy Anchormen\n\n\nAnchormen has an open working environment. Everyone is open to initiatives. You can be proactive in these, and have every freedom to allow your work to be part of our success. We don’t believe in micro-management, but give our people the freedom to function optimally. Hard work naturally also plays a part – but with enjoyment!\n\n\nWhat we offer\n\n\n\n* Flexibility in working from home.\n\n* Competitive market salary.\n\n* Training and development budget for employees’ personal growth.\n\n* Being part of a fast-growing and innovative company.\n\n* Travel allowance.\n\n* Friendly and cooperative colleagues.\n\n* Daily office fruit and snacks.\n\n* All the coffee you can consume!\n\n\n


See more jobs at Anchormen

# How do you apply?\n\n This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.

AdRoll


closed

exec

 
This job post is closed and the position is probably filled. Please do not apply.
\nAbout the Role:\n\nAdRoll's data infrastructure processes 100TB of compressed data, 4 trillion events, and 100B real-time events daily on a scalable, highly available platform. As a member of the data & analytics team, you will work closely with data engineers, analysts, and data scientists to develop novel systems, algorithms, and processes to handle massive amounts of data using languages such as Python and Java.\n\nResponsibilities:\n\n\n* Develop and operate our data pipeline & infrastructure\n\n* Work closely with analysts and data scientists to develop data-driven dashboards and systems\n\n* Tackle some of the most challenging problems in high-performance, scalable analytics\n\n* Be available for after-hours issues and able to be on call, while aiming incessantly to reduce after-hours incidents\n\n* Communicate with Product and Engineering Managers\n\n* Mentor Junior Engineers on the team\n\n\n\n\nQualifications:\n\n\n* A BS or MS degree in Computer Science or Computer Engineering, or equivalent experience\n\n* 4-6 years of experience, at least 2 of which include leading teams\n\n* Experience with scalable systems, large-scale data processing, and ETL pipelines\n\n* Experience with big data technologies such as Hadoop, Hive, Spark, or Storm\n\n* Experience with NoSQL databases such as Redis, Cassandra, or HBase\n\n* Experience with SQL and relational databases such as Postgres or MySQL\n\n* Experience developing and deploying applications on Linux infrastructure\n\n\n\n\nBonus Points:\n\n\n* Knowledge of Amazon EC2 or other cloud-computing services\n\n* Experience with Presto (https://prestodb.io/)\n\n\n\n\nCompensation:\n\n\n* Competitive salary and equity\n\n* Medical / Dental / Vision benefits\n\n* Paid time off and generous holiday schedule\n\n* The opportunity to win the coveted Golden Bagel award\n\n\n
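As a toy, single-node illustration of the "ETL pipelines feeding data-driven dashboards" work mentioned above, here is a small Python rollup step over gzipped JSON-lines event logs. The file layout and field names (`campaign_id`, `event_type`) are assumptions made for illustration; the real pipeline would run at far larger scale on distributed infrastructure.

```python
# Toy daily rollup sketch; directory layout and event fields are hypothetical.
import gzip
import json
from collections import Counter
from pathlib import Path

def rollup(day_dir: str) -> Counter:
    """Count events per (campaign_id, event_type) across gzipped JSON-lines files."""
    counts = Counter()
    for path in Path(day_dir).glob("*.json.gz"):
        with gzip.open(path, "rt") as fh:
            for line in fh:
                event = json.loads(line)
                counts[(event["campaign_id"], event["event_type"])] += 1
    return counts

if __name__ == "__main__":
    for (campaign, event_type), n in rollup("events/2020-01-01").most_common(10):
        print(campaign, event_type, n)
```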


See more jobs at AdRoll

# How do you apply?\n\n This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.

AdRoll


closed

senior

 
This job post is closed and the position is probably filled. Please do not apply.
\nAbout the Role:\n\nAdRoll's data infrastructure processes 100TB of compressed data, 4 trillion events, and 100B real-time events daily on a scalable, highly available platform. As a member of the data & analytics team, you will work closely with data engineers, analysts, and data scientists to develop novel systems, algorithms, and processes to handle massive amounts of data using languages such as Python and Java.\n\nResponsibilities:\n\n\n* Develop and operate our data pipeline & infrastructure\n\n* Work closely with analysts and data scientists to develop data-driven dashboards and systems\n\n* Tackle some of the most challenging problems in high-performance, scalable analytics\n\n* Be available for after-hours issues and able to be on call, while aiming incessantly to reduce after-hours incidents\n\n* Be available to assist Junior Engineers on the team\n\n\n\n\nQualifications:\n\n\n* A BS or MS degree in Computer Science or Computer Engineering, or equivalent experience\n\n* 3+ years of experience\n\n* Experience with scalable systems, large-scale data processing, and ETL pipelines\n\n* Experience with big data technologies such as Hadoop, Hive, Spark, or Storm\n\n* Experience with NoSQL databases such as Redis, Cassandra, or HBase\n\n* Experience with SQL and relational databases such as Postgres or MySQL\n\n* Experience developing and deploying applications on Linux infrastructure\n\n\n\n\nBonus Points:\n\n\n* Knowledge of Amazon EC2 or other cloud-computing services\n\n* Experience with Presto (https://prestodb.io/)\n\n\n\n\nCompensation:\n\n\n* Competitive salary and equity\n\n* Medical / Dental / Vision benefits\n\n* Paid time off and generous holiday schedule\n\n* The opportunity to win the coveted Golden Bagel award\n\n\n


See more jobs at AdRoll

# How do you apply?\n\n This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.

Hotjar


closed
Valletta

amazon

 

elasticsearch

 

python

 
This job post is closed and the position is probably filled. Please do not apply.
**Note: although this is a remote position, we are currently only seeking candidates in time zones between UTC-2 and UTC+7.**\n\n\n\nHotjar is looking for a driven and ambitious DevOps Engineer with Big Data experience to support and expand our cloud-based infrastructure used by thousands of sites around the world. The Hotjar infrastructure currently processes more than 7500 API requests per second, delivers over a billion pieces of static content every week and hosts databases well into terabyte-size ranges, making this an interesting and challenging opportunity. As Hotjar continues to grow rapidly, we are seeking an engineer who has experience dealing with high traffic cloud based applications and can help Hotjar scale as our traffic multiplies. \n\n\n\nThis is an excellent career opportunity to join a fast growing remote startup in a key position.\n\n\n\nIn this position, you will:\n\n\n\n- Be part of our DevOps team building and maintaining our web application and server environment.\n\n- Choose, deploy and manage tools and technologies to build and support a robust infrastructure.\n\n- Be responsible for identifying bottlenecks and improving performance of all our systems.\n\n- Ensure all necessary monitoring, alerting and backup solutions are in place.\n\n- Do research and keep up to date on trends in big data processing and large scale analytics.\n\n- Implement proof of concept solutions in the form of prototype applications.\n\n\n\n\n\n \n\n \n\n#Salary and compensation\n - /year\n\n\n#Location\nValletta
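For the "monitoring, alerting and backup" and "prototype application" responsibilities listed above, a proof-of-concept probe can be very small. The sketch below checks an Elasticsearch cluster health endpoint and its latency; the endpoint, thresholds and exit-code alerting are illustrative assumptions, not Hotjar's actual setup.

```python
# A tiny monitoring-probe prototype; endpoint and thresholds are made up.
import sys
import time
import requests

ES_HEALTH_URL = "http://localhost:9200/_cluster/health"   # hypothetical cluster
MAX_LATENCY_S = 0.5

def check() -> bool:
    start = time.monotonic()
    resp = requests.get(ES_HEALTH_URL, timeout=5)
    latency = time.monotonic() - start
    status = resp.json().get("status", "red")
    ok = resp.ok and status in ("green", "yellow") and latency < MAX_LATENCY_S
    print(f"status={status} latency={latency:.3f}s ok={ok}")
    return ok

if __name__ == "__main__":
    # In a real setup this would page an on-call engineer instead of exiting non-zero.
    sys.exit(0 if check() else 1)
```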


See more jobs at Hotjar

# How do you apply?\n\n This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.

SkyTruth


closed
This job post is closed and the position is probably filled. Please do not apply.
\nThis is an extraordinary opportunity to get to use cutting-edge big data and machine learning tools while doing something good for the planet and open-sourcing all your code.\n\nSkyTruth is seeking an engineer to join the team that is building Global Fishing Watch, which is a partnership of SkyTruth, Oceana and Google, supported by Leonardo DiCaprio, and dedicated to saving the world's oceans from ruinous overfishing [Wired]. Our team works directly with the Google engineers who support Cloud ML, TensorFlow and DataFlow, and we are a featured Google partner.\n\nhttps://cloud.google.com/customers/global-fishing-watch/\n\nhttps://environment.google/projects/fishing-watch/\n\nhttps://blog.google/products/maps/mapping-global-fishing-activity-machine-learning/\n\nYour job is to develop, improve and operationalize the multiple pipelines we use to process terabytes of vessel tracking data collected by a constellation of satellites.  We have a data set containing billions of vessel position reports, from which we derive behaviors based on movement characteristics using Cloud ML, and publish a dynamically updated map of global commercial fishing activity.\n\nYou will join a fully distributed team of engineers, data scientists and designers who are building and open-sourcing the next generation of the product and who are very committed to creating a positive impact in the world while also solving novel problems using cutting-edge tools. \n\nThe company is headquartered in Washington DC, the data science team is in San Francisco, and we have engineers in the US, Europe, South America and Indonesia.  Daily scrums are scheduled around the US East Coast timezone (so that kind of sucks for the guy in Indonesia :-)\n\nBecause this is open to remote work, we will get a lot of applicants. We are not just looking for an engineer with great skills who wants to work with cool tech.  We also want you to be inspired by the project, so please tell us something that excites you about what we're doing when you contact us. \n\nHere's some more stuff you can read about the impact our work has:\n\nNew York Times: Palau vs the Poachers\n\nScience: Ending hide and seek at sea\n\nWashington Post: How Google is helping to crack down on illegal fishing — from space
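As a deliberately simplified illustration of "deriving behaviors based on movement characteristics" from position reports, here is a small Python sketch that computes speed between consecutive points on a vessel track. The data layout (a list of dicts with `lat`, `lon`, `ts`) is an assumption for illustration; the real Global Fishing Watch pipeline uses far richer features and Cloud ML models.

```python
# Simplified movement-feature sketch; input layout is hypothetical.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

def speeds_knots(track):
    """Yield speed (knots) for each pair of consecutive position reports."""
    for prev, cur in zip(track, track[1:]):
        hours = (cur["ts"] - prev["ts"]) / 3600.0
        km = haversine_km(prev["lat"], prev["lon"], cur["lat"], cur["lon"])
        if hours > 0:
            yield km / 1.852 / hours   # km/h converted to knots

track = [
    {"ts": 0,    "lat": 0.00, "lon": 0.00},
    {"ts": 3600, "lat": 0.05, "lon": 0.05},
]
print(list(speeds_knots(track)))
```

Speed profiles like this are one of the simplest movement characteristics from which fishing-like behavior can be inferred at scale.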


See more jobs at SkyTruth

# How do you apply?\n\n This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.

Spinn3r


closed

java

 
This job post is closed and the position is probably filled. Please do not apply.
\nCompany\n\nSpinn3r is a social media and analytics company looking for a talented Java “big data” engineer. \n\nAs a mature, ten (10) year old company, Spinn3r provides high-quality news, blogs and social media data for analytics, search, and social media monitoring companies.   We’ve just recently completed a large business pivot, and we’re in the process of shipping new products, so it's an exciting time to come on board!\n\nIdeal Candidate\n\nWe're looking for someone with a passion for technology, big data, and the analysis of vast amounts of content; someone with experience aggregating and delivering data derived from web content, and someone comfortable with a generalist and devops role.  We require that you have knowledge of standard system administration tasks and a firm understanding of modern cluster architecture.  \n\nWe’re a San Francisco company, and ideally there should be at least a 4-hour overlap with the Pacific Standard Time Zone (PST / UTC-8).  If you don't have a natural time overlap with UTC-8 you should be willing to work an alternative schedule to be able to communicate easily with the rest of the team.  \n\nCulturally, we operate as a “remote” company and require that you’re generally available for communication and are self-motivated and remain productive.\n\nWe are open to either a part-time or full-time independent contractor role.\n\nResponsibilities\n\n\n* Understanding our crawler infrastructure;\n\n* Ensuring top quality metadata for our customers. There's a significant batch job component to analyze the output to ensure top quality data;\n\n\n\n\n\n* Making sure our infrastructure is fast, reliable, fault-tolerant, etc.  At times this may involve diving into the source of tools like ActiveMQ to understand how the internals work.  We contribute to Open Source development to give back to the community; and\n\n\n\n\n\n* Building out new products and technology that will directly interface with customers. This includes cool features like full-text search, analytics, etc. It's extremely rewarding to build something from the ground up and push it to customers directly. \n\n\n\n\nArchitecture\n\nOur infrastructure consists of Java on Linux (Debian/Ubuntu) with the stack running on ActiveMQ, Zookeeper, and Jetty.  We use Ansible to manage our boxes. We have a full-text search engine based on Elasticsearch which also backs our Firehose API.\n\nHere are all the cool products that you get to work with:\n\n\n* Large Linux / Ubuntu cluster running with the OS versioned using both Ansible and our own Debian packages for software distribution;\n\n* Large amounts of data indexed from the web and social media.  We index from 5-20TB of data per month and want to expand to 100TB of data per month; and \n\n* SOLR / Elasticsearch migration / install.  We’re experimenting with bringing this up now so it would be valuable to get your feedback.\n\n\n\n\nTechnical Skills\n\nWe're looking for someone with a number of the following requirements:\n\n\n* Experience in modern Java development and associated tools: Maven, IntelliJ IDEA, Guice (dependency injection);\n\n* A passion for testing, continuous integration, and continuous delivery;\n\n\n\n\n\n* ActiveMQ. Powers our queue server for scheduling crawl work;\n\n\n\n\n\n* A general understanding of and passion for distributed systems;\n\n* Ansible or equivalent experience with configuration management; \n\n* Standard web API use and design. (HTTP, JSON, XML, HTML, etc.); and\n\n* Linux, Linux, Linux.  
We like Linux!\n\n\n\n\n\nCultural Fit\n\nWe’re a lean startup and very driven by our interaction with customers, as well as their happiness and satisfaction. Our philosophy is that you shouldn’t be afraid to throw away a week's worth of work if our customers aren’t interested in moving in that direction.\n\nWe hold the position that our customers are our responsibility and we try to listen to them intently and consistently:\n\n\n* Proficiency in English is a requirement. Since you will have colleagues in various countries with various primary language skills we all need to use English as our common company language. You must also be able to work with email, draft proposals, etc. Internally we work as a large distributed Open Source project and use tools like email, slack, Google Hangouts, and Skype; \n\n* Familiarity working with a remote team and ability (and desire) to work for a virtual company. Should have a home workstation, and fast Internet access, etc.;\n\n* Must be able to manage your own time and your own projects.  Self-motivated employees will fit in well with the rest of the team; and\n\n* It goes without saying; but being friendly and a team player is very important.\n\n\n\n\nCompensation\n\n\n* Salary based on experience;\n\n* We're a competitive, great company to work for; and\n\n* We offer the ability to work remotely, allowing for a balanced live-work situation.\n\n\n\n\n\n\n
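Since the architecture above centres on an Elasticsearch-backed full-text index behind a Firehose API, here is a minimal sketch of querying such an index over its REST interface. The index name, field names and local endpoint are assumptions for illustration, not Spinn3r's actual API.

```python
# Minimal Elasticsearch full-text query over HTTP; names and endpoint are made up.
import requests

ES_URL = "http://localhost:9200/posts/_search"   # hypothetical index

query = {
    "query": {"match": {"content": "social media analytics"}},
    "size": 5,
    "sort": [{"published": {"order": "desc"}}],
}

resp = requests.get(ES_URL, json=query, timeout=10)
resp.raise_for_status()
for hit in resp.json()["hits"]["hits"]:
    print(hit["_id"], hit["_source"].get("title"))
```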


See more jobs at Spinn3r

# How do you apply?\n\n This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.

Hotjar


closed

devops

 

devops

This job post is closed and the position is probably filled. Please do not apply.
\nNote: Although this is a remote position, we are currently only seeking candidates in timezones between UTC-2 and UTC+7.\n\nHotjar is looking for a driven and ambitious DevOps Engineer with Big Data experience to support and expand our cloud-based infrastructure used by thousands of sites around the world. The Hotjar infrastructure currently processes more than 7500 API requests per second, delivers over a billion pieces of static content every week and hosts databases well into terabyte-size ranges, making this an interesting and challenging opportunity. As Hotjar continues to grow rapidly, we are seeking an engineer who has experience dealing with high-traffic cloud-based applications and can help Hotjar scale as our traffic multiplies. \n\nThis is an excellent career opportunity to join a fast-growing remote startup in a key position.\n\nIn this position, you will:\n\n\n* Be part of our DevOps team building and maintaining our web application and server environment.\n\n* Choose, deploy and manage tools and technologies to build and support a robust infrastructure.\n\n* Be responsible for identifying bottlenecks and improving performance of all our systems.\n\n* Ensure all necessary monitoring, alerting and backup solutions are in place.\n\n* Do research and keep up to date on trends in big data processing and large scale analytics.\n\n* Implement proof-of-concept solutions in the form of prototype applications.\n\n\n


See more jobs at Hotjar

# How do you apply?\n\n This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.

Hazelcast


closed
This job post is closed and the position is probably filled. Please do not apply.
\nWould you like to work on a new and exciting big data project? Do you enjoy any of the following?\n\n\n* Solving complex problems around distributed data processing.\n\n* Implementing non-trivial infrastructure code.\n\n* Creating well crafted and thoroughly tested features, taking full responsibility from the design phase.\n\n* Paying attention to all aspects of code quality, from clean code to allocation rates.\n\n* Digging into mechanical sympathy concepts.\n\n* Delivering a technical presentation at a conference.\n\n\n\n\nAt Hazelcast you will have the opportunity to work with some of the best engineers out there:\n\n\n* Who delve into JVM code.\n\n* Who implement and scrutinize garbage collection algorithms.\n\n* Who take any piece of software and multiply its performance by applying deep technical understanding. \n\n* Who regularly squash bugs in the depths of a JVM\n\n\n\n\nWe are looking for people who can deliver solid production code. You may work in our offices in London or Istanbul, or code remotely from a home office. It is also preferable that you are within a few hours of the CET timezone as this is where most of the developers are based.


See more jobs at Hazelcast

# How do you apply?\n\n This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.

Clear Returns


closed
This job post is closed and the position is probably filled. Please do not apply.
\nThis is an exciting opportunity that incorporates managing all technical and engineering aspects of the analytics infrastructure. You will maintain and improve the data warehouse and ensure that data obtained from diverse and varied sources is appropriately captured, cleaned, and utilised.  As a member of a small team you could also get the opportunity to be involved in Data Science projects, though this is not a prerequisite of the role. You will have significant influence on the future of the technologies used by the team which is central to the company’s growth strategy.


See more jobs at Clear Returns

# How do you apply?\n\n This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.

Crossover


closed

senior

 

dev

This job post is closed and the position is probably filled. Please do not apply.
\nAre you a Senior Software Engineer who has spent several years working with Big Data technologies? Have you created streaming analytics algorithms to process terabytes of real-time data and deployed Hadoop or Cassandra clusters over dozens of VMs? Have you been part of an organization driven by a DevOps culture, where the engineer has end-to-end responsibility for the product, from development to operating it in production? Are you willing to join a team of elite engineers working on a fast-growing analytics business? Then this role is for you! \n \nJob Description \nThe Software and DevOps engineer will help build the analytics platform for Bazaarvoice data that will power our client-facing reporting, product performance reporting, and financial reporting. You will also help us operationalize our Hadoop clusters, Kafka and Storm services, high-volume event collectors, and build out improvements to our custom analytics job portal in support of Map/Reduce and Spark jobs. The Analytics Platform is used to aggregate data sets to build out known new product offerings related to analytics and media as well as a number of pilot initiatives based on this data. You will need to understand the business cases of the various products and build a common platform and set of services that help all of our products move fast and iterate quickly. You will help us pick and choose the right technologies for this platform. \nKey Responsibilities \n \nIn your first 90 days you can expect the following: \n * \nAn overview of our Big Data platform code base and development model \n * \nA tour of the products and technologies leveraging the Big Data Analytics Platform \n * \n4 days of Cloudera training to provide a quick ramp-up of the technologies involved \n * \nBy the end of the 90 days, you will be able to complete basic enhancements to code supporting large-scale analytics using Map/Reduce as well as contribute to the operational maintenance of a high-volume event collection pipeline. \n \n \n \nWithin the first year you will: \n * \nOwn design, implementation, and support of major components of platform development. This includes working with the various stakeholders for the platform team to understand their requirements and deliver high-leverage capabilities. \n * \nHave a complete grasp of the technology stack, and help guide where we go next. \n \n
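For a sense of the "high volume event collection" side of the platform described above, here is a minimal event-consumer sketch using the kafka-python client. The topic name, broker address and the `event_type` field are illustrative assumptions, not Bazaarvoice's actual schema or tooling.

```python
# Minimal Kafka event-consumer sketch; topic, broker and schema are hypothetical.
import json
from collections import Counter
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "analytics-events",                               # hypothetical topic
    bootstrap_servers="localhost:9092",
    auto_offset_reset="earliest",
    value_deserializer=lambda m: json.loads(m.decode("utf-8")),
)

counts = Counter()
for message in consumer:
    counts[message.value.get("event_type", "unknown")] += 1
    if sum(counts.values()) % 10_000 == 0:
        print(dict(counts))   # periodically emit running totals downstream
```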


See more jobs at Crossover

# How do you apply?\n\n This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.

Instructure


closed

senior

 

dev

This job post is closed and the position is probably filled. Please do not apply.
Instructure was founded to define, develop, and deploy superior, easy-to-use software. (And that’s what we did / do / will keep on doing.) We are dedicated to the fight against iffy, mothbally, shoddy software. We make better, more usable tools for teaching and learning (you know, stuff people will actually use). A better connected and more open edtech ecosystem. And more effective ways for everyone everywhere to access education, make discoveries, share knowledge, be inspired, and do big things. We accomplish all this by giving smart, creative, passionate people opportunities to create awesome. So here’s your opportunity.\n\nWe are hiring engineers passionate about using data to gain insight, drive behavior and improve our products. Our software helps millions of users learn and grow. Come help accelerate the learning process by developing data centric features for K-12, higher education and corporate users.\n\n\n\n\n\nWHAT YOU WILL BE DOING:\n\n\n\n\n* The Instructure suite of SaaS applications produces terabytes of events and student information weekly. Your challenge will be to create the systems that organize this data and return insights to students, teachers and administrators. You will also work to integrate data driven features into core Instructure products. \n\n* This team engineers the data and analytics platform for the entire Instructure application portfolio. This is a growing team at Instructure with the opportunity to provide tangible positive impact to the business and end users. We are looking for creative, self-motivated, highly collaborative, extremely technical people who can drive a vision to reality.\n\n\n\n\n


See more jobs at Instructure

# How do you apply?\n\n This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.

Convertro


closed
This job post is closed and the position is probably filled. Please do not apply.
\nDo you want to solve real-world business problems with cutting edge technology in a creative and exciting start-up? Are you a smart person who gets stuff done?\n \n Convertro is looking for you. We are hiring an engineer with experience building analytical systems in Map Reduce, Hadoop, Hbase, or similar distributed systems programming. You will improve the scalability, flexibility, and stability of our existing Hadoop architecture as well as help develop our next generation data analytics platform. You will rapidly create prototypes and quickly iterate to a stable, production-quality release candidate.


See more jobs at Convertro

# How do you apply?\n\n This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.

American Express


closed
This job post is closed and the position is probably filled. Please do not apply.
\nAmerican Express is looking for energetic, high-performing software engineers to help shape our technology and product roadmap. You will be part of the fast-paced big data team. As a part of the Customer Marketing and Big Data Platforms organization, which enables Big Data and batch/real-time analytical solutions leveraging transformational technologies (Hadoop, HDFS, MapReduce, Hive, HBase, Pig, etc.), you will be working on innovative platform and data science projects across multiple business units (e.g., RIM, GNICS, OPEN, CS, EG, GMS, etc.).\n\n\n\nOffer of employment with American Express is conditioned upon the successful completion of a background verification check, subject to applicable laws and regulations.\nQualifications\n·Hands-on expertise with application design, software development, and automated testing - Experience collaborating with the business to drive requirements/Agile story analysis \n\n·Ability to effectively interpret technical and business objectives and challenges, and articulate solutions \n\n·Ability to think abstractly and deal with ambiguous/under-defined problems - Ability to enable business capabilities through innovation \n\n·Looks proactively beyond the obvious for continuous improvement opportunities \n\n·High energy, demonstrated willingness to learn new technologies, and takes pride in how fast they develop working software \n\n·Strong programming knowledge in C++ / Java \n\n·Solid understanding of data structures and common algorithms \n\n·Knowledge of RDBMS concepts and experience with SQL \n\n·Understanding and experience with UNIX / Shell / Perl / Python scripting \n\n·Experience in Big Data Components/Frameworks (Hadoop, HBase, HDFS, Pig, Hive, Sqoop, Flume, Oozie, Avro, etc.) and other AJAX tools/Frameworks \n\n·Database query optimization and indexing \n\n·Bonus skills: Object-oriented design and coding with a variety of languages: Java, J2EE, and parallel and distributed systems \n\n·Machine learning/data mining \n\n·Web services design and implementation using REST / SOAP - Bug-tracking, source control, and build systems\n\nAmerican Express is an equal opportunity employer and makes employment decisions without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other status protected by law. Click here to view the 'EEO is the Law' poster.\n\n\n\nReqID: 15017390
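Since the qualifications above combine MapReduce with UNIX/Python scripting, here is a compact Hadoop Streaming sketch (mapper and reducer in one file) that counts records per category. The tab-separated input layout and the position of the "category" column are assumptions for illustration only.

```python
# Hadoop Streaming mapper/reducer sketch; input layout is hypothetical.
import sys

def mapper():
    for line in sys.stdin:
        fields = line.rstrip("\n").split("\t")
        if len(fields) > 2:
            print(f"{fields[2]}\t1")          # key = hypothetical category column

def reducer():
    # Hadoop Streaming sorts by key before the reducer, so consecutive keys group together.
    current, total = None, 0
    for line in sys.stdin:
        key, value = line.rstrip("\n").split("\t")
        if key != current:
            if current is not None:
                print(f"{current}\t{total}")
            current, total = key, 0
        total += int(value)
    if current is not None:
        print(f"{current}\t{total}")

if __name__ == "__main__":
    mapper() if sys.argv[1:] == ["map"] else reducer()
```

With Hadoop Streaming this file would typically be wired in with something like `-mapper "python job.py map" -reducer "python job.py"`, though the exact invocation depends on the cluster setup.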


See more jobs at American Express

# How do you apply?\n\n This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.

Hearst Corporation


closed

full stack

 
This job post is closed and the position is probably filled. Please do not apply.
\nWe’re creating a game-changing modern content platform - built from the ground up.  It will give our users, editors, and advertisers tools that enable them to react to the world in real-time in making decisions around content publishing and revenue generation. We are doing this by working with Big Data scientists to build a modern information pipeline to enable intelligent and optimized media applications.  We’re using modern web technologies to do this. We’re building an open, service-oriented platform driven by APIs, and believe passionately in crafting simple, elegant solutions to complex technological and product problems.  Our day to day is much like a technology start-up company - with the strong support of a large corporation that believes in what we're doing.\n\nWe’re hiring talented and passionate Software Engineers to be part of a corporate open-source movement in the company to build out our new platform. The ideal candidate has extensive experience writing clean object-oriented code, building and working with RESTful APIs, has worked in cloud based environments like AWS and likes being part of a collaborative tech team.\n\nWe consistently hold ourselves to high standards of software development, code review and deployment.  Our workflow embraces automated testing and continuous integration.  We work closely with our DevOps team to allow for developers to focus on what they do best - creatively build innovative software solutions.


See more jobs at Hearst Corporation

# How do you apply?\n\n This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.

Turbine WB Games


closed

senior

 
This job post is closed and the position is probably filled. Please do not apply.
\nWBPlay – a team within Turbine that is responsible for delivering key technology platforms that support games across WB – is seeking a Senior Big Data Engineer to provide hands on development within our Core Analytics Platform team. As a key contributor reporting directly to the Director of Analytics Platform Development, this individual will work closely with developers and dev-ops engineers across multiple teams to build and operate a best-in-class game analytics platform.\n\n\nThe successful candidate will participate in software development and dev-ops projects, using Agile methodologies, to build scalable, reliable technologies and infrastructure for our cross-game data analytics platform. This big data platform powers analytics for WB’s games across multiple networks, devices and operating environments, including Xbox One, PS4, iOS, and Android. This is a role combining proven technical skills in various Big Data ecosystems, with a strong focus on open-source (Apache) software and cloud (AWS) infrastructure.\n\n\nOur ideal candidate is fluent in several big data technologies - including Hadoop, Spark, MPP databases, and NoSQL databases - and has deep experience in implementation of complex distributed computing environments which ingest, process, and surface hundreds of terabytes of data from dozens of sources, in near real time, for analysis by data scientists and other stakeholders.\n\nJOB RESPONSIBILITIES\n\n\n* \n\nResponsible for the building, deployment, and maintenance of mission critical analytics solutions that process data quickly at big data scales\n\n\n* \n\nContributes design, code, configurations, and documentation for components that manage data ingestion, real time streaming, batch processing, data extraction, transformation, and loading across multiple game franchises.\n\n\n* \n\nOwns one or more key components of the infrastructure and works to continually improve it, identifying gaps and improving the platform’s quality, robustness, maintainability, and speed.\n\n\n* \n\nCross-trains other team members on technologies being developed, while also continuously learning new technologies from other team members.\n\n\n* \n\nInteracts with engineering teams across WB and ensures that solutions meet customer requirements in terms of functionality, performance, availability, scalability, and reliability.\n\n\n* \n\nPerforms development, QA, and dev-ops roles as needed to ensure total end to end responsibility of solutions.\n\n\n* \n\nWorks directly with business analysts and data scientists to understand and support their use cases\n\n\n\n
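As one hedged illustration of the "real time streaming" ingestion work listed above, here is a minimal Spark Structured Streaming sketch that counts game events per minute from a Kafka topic. The topic name, broker address and one-minute window are assumptions for illustration, and the job would need the Spark Kafka connector on the classpath (e.g. via `--packages org.apache.spark:spark-sql-kafka-0-10_2.12:<version>`).

```python
# Windowed event-count sketch over a hypothetical Kafka topic.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("streaming-count-sketch").getOrCreate()

events = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "localhost:9092")
         .option("subscribe", "game-events")            # hypothetical topic
         .load()
)

# Count Kafka records per one-minute window of their ingestion timestamp.
counts = events.groupBy(F.window(F.col("timestamp"), "1 minute")).count()

query = (
    counts.writeStream
          .outputMode("update")
          .format("console")
          .option("truncate", "false")
          .start()
)
query.awaitTermination()
```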


See more jobs at Turbine WB Games

# How do you apply?\n\n This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.

Verizon


closed
This job post is closed and the position is probably filled. Please do not apply.
\nGrow your IT career at one of the leading global technology companies. We offer hands-on exposure to state-of-the-art systems, applications and infrastructures.\n\nResponsibilities\n\n\n* Architect, design and build a big data platform, primarily based on the Hadoop ecosystem, that is fault-tolerant & scalable.\n\n* Build a high-throughput messaging framework to transport high-volume data.\n\n* Use different protocols as needed for different data services (NoSQL/JSON/REST/JMS).\n\n* Develop a framework to deploy RESTful web services.\n\n* Build ETL, distributed caching, transactional and messaging services.\n\n* Architect and build a security-compliant user management framework for a multitenant big data platform.\n\n* Build High-Availability (HA) architectures and deployments primarily using big data technologies.\n\n* Create and manage data pipelines.\n\n\n


See more jobs at Verizon

# How do you apply?\n\n This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.

Verizon


closed
This job post is closed and the position is probably filled. Please do not apply.
\nStay on the front lines of groundbreaking technology. We're committed to a dynamic, ever-evolving infrastructure and the hard work it takes to keep our reliable network thriving. Help support the growing demands of an interconnected world.\n\nResponsibilities\n\nVerizon Corporate Technology's Big Data Group is looking for Big Data engineers with expert-level experience in architecting and building our new Hadoop, NoSQL and InMemory platform(s) and data collectors. You will be part of the team building one of the world's largest Big Data platforms, ingesting hundreds of terabytes of data to be consumed for Business Analytics, Operational Analytics, Text Analytics and Data Services, and building Big Data solutions for various Verizon business units.\n\nThis is a unique opportunity to be part of building disruptive technology where Big Data will be used as a platform to build solutions for Analytics, Data Services and Solutions.\n\nResponsibility:\n\n\n* Hands-on contribution to business logic using the Hadoop ecosystem (Java MR, Pig, Scala, HBase, Hive)\n\n* Work on technologies related to NoSQL, SQL and InMemory platform(s)\n\n\n


See more jobs at Verizon

# How do you apply?\n\n This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.

Verizon


closed
This job post is closed and the position is probably filled. Please do not apply.
\nGrow your IT career at one of the leading global technology companies. We offer hands-on exposure to state-of-the-art systems, applications and infrastructures.\n\nResponsibilities\n\nVerizon Corporate Technology's Big Data Group is looking for Big Data engineers with expert-level experience in building our new Hadoop, NoSQL and InMemory platform(s), data collectors and applications. You will be part of the team building one of the world's largest Big Data platforms, ingesting hundreds of terabytes of data to be consumed for Business Analytics, Operational Analytics, Text Analytics and Data Services, and building Big Data solutions for various Verizon business units.\n\nResponsibility:\n\n\n* Architect, design and build a big data platform, primarily based on the Hadoop ecosystem, that is fault-tolerant & scalable.\n\n* Build a high-throughput messaging framework to transport high-volume data.\n\n* Provide guidance to members of the team to build complex, high-throughput big data subsystems.\n\n* Use different protocols as needed for different data services (NoSQL/JSON/REST/JMS).\n\n* Develop a framework to deploy RESTful web services.\n\n* Build ETL, distributed caching, transactional and messaging services.\n\n* Architect and build a security-compliant user management framework for a multitenant big data platform.\n\n* Build High-Availability (HA) architectures and deployments primarily using big data technologies.\n\n* Expert-level experience with the Hadoop ecosystem (Spark, HBase, Solr).\n\n* Create and manage data pipelines.\n\n\n


See more jobs at Verizon

# How do you apply?\n\n This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.

Jet.com


closed
This job post is closed and the position is probably filled. Please do not apply.
\n“Engineers are Astronauts at Jet”\n- Mike Hanrahan, Jet’s CTO\n\n\nYou'll be responsible for helping to build a world-class data platform to collect, process, and manage a vast amount of information generated by Jet's rapidly growing business.\n\n\n\nAbout Jet\n\nJet’s mission is to become the smartest way to shop and save on pretty much anything. Combining a revolutionary pricing engine, a world-class technology and fulfillment platform, and incredible customer service, we’ve set out to create a new kind of e-commerce.  At Jet, we’re passionate about empowering people to live and work brilliant.\n\n\nAbout Jet’s Internal Engine\n\nWe’re building a new kind of company, and we’re building it from the inside out, which means that investing in hiring, developing, and retaining the brightest minds in the world is a top priority. Everything we do is grounded in three simple values:  trust, transparency, and fairness.  From our business model to our culture, we live our values to the extreme, whether we’re dealing with employees, retail partners, or consumers.  We believe that happiness is the highest level of success and we want every person who crosses paths with Jet to achieve it.  If you’re an ambitious, smart, natural collaborator who likes taking risks, influencing, and innovating in a challenging hyper-growth environment, we’d love to talk to you about joining our team.\n\n\nAbout the Job\n\nWe are looking for an exceptional Data Engineer to help build a world-class analytical platform to collect, store and expose both structured and unstructured data generated by a rapidly growing system landscape at Jet.com.\n\nYou can expect a freewheeling, informal work environment, populated by a combination of folks from top companies that have produced many successful products, as well as some PhDs who have escaped the ivory tower.\n\nWe have lots of perks like free lunches, but you will be so engrossed with the challenges of the job that the free stuff will be more like icing on the cake.\n\nBecause we work on cutting-edge technologies, we need someone who is a creative problem solver, resourceful in getting things done, and productive working independently or collaboratively. This person would take on the following responsibilities:\n\n\n* Design, implement and manage a near real-time ingestion pipeline into a data warehouse and Hadoop data lake.\n\n* Gather and process raw data at scale - collect data across all business domains (our functional-first, event-sourced, microservices backend) and expose mechanisms for large-scale parallel processing\n\n* Process unstructured data into a form suitable for analysis and then empower state-of-the-art analysis for analysts, scientists, and APIs.\n\n* Support business decisions with ad hoc analysis as needed.\n\n* Evangelize an extremely high standard of code quality, system reliability, and performance.\n\n* Influence cross-functional architecture in sprint planning.\n\n\n\n\nAbout You\n\n\n* Experience in running, using and troubleshooting the Apache Big Data stack, i.e. Hadoop FS, Hive, HBase, Kafka, Pig, Oozie, Yarn.\n\n* Programming experience, ideally in Scala or F#, but we are open to other experience if you’re willing to learn the languages we use.\n\n* Proficient scripting skills, e.g. Unix shell and/or PowerShell\n\n* Experience processing large amounts of structured and unstructured data with MapReduce.\n\n* We use Azure extensively, so experience with cloud infrastructure will help you hit the ground running.\n\n\n\n\nCompensation Philosophy\n\nOur compensation philosophy is simple but powerful. Give everyone a meaningful stake in the company—the purest form of ownership. That’s why on top of base salary, Jet’s comp structure is heavily weighted in equity. Our collective hard work, high performance, and tenure are rewarded as our equity builds in value.\n\n\nBenefits & Perks\n\nCompetitive Salaries.  Real Ownership in the form of Stock Options.  Unlimited Vacation.  Full Healthcare Benefits.  Exceptional Work Environment.  Learning & Development Opportunities.  Just for fun Networking & Events.
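As a toy sketch of the "land raw events into a partitioned data lake" step of an ingestion pipeline like the one described above, here is a small Python function that routes raw JSON events into domain- and date-partitioned JSON-lines files. Local directories stand in for cloud storage, and the event fields (`domain`, `occurred_at`) are assumptions for illustration, not Jet's actual event schema.

```python
# Toy data-lake landing step; paths and event fields are hypothetical.
import json
from datetime import datetime, timezone
from pathlib import Path

LAKE_ROOT = Path("datalake/raw")

def land_event(event: dict) -> Path:
    """Append one raw event to a domain/date-partitioned JSON-lines file."""
    occurred = datetime.fromtimestamp(event["occurred_at"], tz=timezone.utc)
    part_dir = LAKE_ROOT / event["domain"] / occurred.strftime("date=%Y-%m-%d")
    part_dir.mkdir(parents=True, exist_ok=True)
    target = part_dir / "events.jsonl"
    with target.open("a") as fh:
        fh.write(json.dumps(event) + "\n")
    return target

print(land_event({"domain": "orders", "occurred_at": 1_500_000_000, "sku": "ABC-123"}))
```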


See more jobs at Jet.com

# How do you apply?\n\n This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.

Criteo


closed

java

 

senior

 
This job post is closed and the position is probably filled. Please do not apply.
\nCRITEO is looking to recruit senior software developers who turn it up to eleven for its R&D Center in Grenoble (in the south-east of France). Your main missions will be to:\n\n- Build systems that make the best decision in 50ms, half a million times per second. Across three continents and six datacenters, 24/7.\n\n- Find the signal hidden in tens of TB of data, in one hour, using over a thousand nodes on our Hadoop cluster. And constantly keep getting better at it while measuring the impact on our business.\n\n- Get stuff done. A problem partially solved today is better than a perfect solution next year. Have an idea during the night? Code it in the morning, push it at noon, test it in the afternoon and deploy it the next morning.\n\n- High stakes, high rewards: a 1% increase in performance may yield millions for the company. But if a single bug goes through, the Internet goes down (we’re only half joking).\n\n- Develop open-source projects. Because we are working at the forefront of technology, we are dealing with problems that few have faced. We’re big users of open source, and we’d like to give back to the community.


See more jobs at Criteo

# How do you apply?\n\n This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.

nugg.ad AG predictive behavioral targeting


closed
This job post is closed and the position is probably filled. Please do not apply.
\nWe are currently building our next generation data management platform and are searching for enthusiastic developers eager to join our team and push back the frontiers of big data processing in high-throughput architectures. Take the unique opportunity to shape and grow an early stage product which will have a significant impact across the advertising market.\n\nAs our Big-Data Engineer you will: \n\n\n* Design and build the core of our new platform\n\n* Identify and deploy the latest big data technologies that suit our challenges\n\n* Define new features and products together with our data-science, consulting and sales teams\n\n* Migrate existing solutions to our Spark/Scala-based architecture\n\n\n


See more jobs at nugg.ad AG predictive behavioral targeting

# How do you apply?\n\n This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.

Crossover


closed

senior

 

dev

This job post is closed and the position is probably filled. Please do not apply.
\nAre you a Senior Software Engineer who has spent several years working with Big Data technologies? Have you created streaming analytics algorithms to process terabytes of real-time data and deployed Hadoop or Cassandra clusters over dozens of VMs? Have you been part of an organization driven by a DevOps culture, where the engineer has end-to-end responsibility for the product, from development to operating it in production? Are you willing to join a team of elite engineers working on a fast-growing analytics business? Then this role is for you! \n \nJob Description \nThe Software and DevOps engineer will help build the analytics platform for Bazaarvoice data that will power our client-facing reporting, product performance reporting, and financial reporting. You will also help us operationalize our Hadoop clusters, Kafka and Storm services, high-volume event collectors, and build out improvements to our custom analytics job portal in support of Map/Reduce and Spark jobs. The Analytics Platform is used to aggregate data sets to build out known new product offerings related to analytics and media as well as a number of pilot initiatives based on this data. You will need to understand the business cases of the various products and build a common platform and set of services that help all of our products move fast and iterate quickly. You will help us pick and choose the right technologies for this platform. \n \nKey Responsibilities \nIn your first 90 days you can expect the following: \n * \nAn overview of our Big Data platform code base and development model \n * \nA tour of the products and technologies leveraging the Big Data Analytics Platform \n * \n4 days of Cloudera training to provide a quick ramp-up of the technologies involved \n * \nBy the end of the 90 days, you will be able to complete basic enhancements to code supporting large-scale analytics using Map/Reduce as well as contribute to the operational maintenance of a high-volume event collection pipeline. \n \n \nWithin the first year you will: \n * \nOwn design, implementation, and support of major components of platform development. This includes working with the various stakeholders for the platform team to understand their requirements and deliver high-leverage capabilities. \n * Have a complete grasp of the technology stack, and help guide where we go next.\n \n


See more jobs at Crossover

# How do you apply?\n\n This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.

Crossover


closed

senior

 

dev

This job post is closed and the position is probably filled. Please do not apply.
\nAre you a Senior Software Engineer who has spent several years working with Big Data technologies? Have you created streaming analytics algorithms to process terabytes of real-time data and deployed Hadoop or Cassandra clusters over dozens of VMs? Have you been part of an organization driven by a DevOps culture, where the engineer has end-to-end responsibility for the product, from development to operating it in production? Are you willing to join a team of elite engineers working on a fast-growing analytics business? Then this role is for you!\n\nJob Description\n\nThe Software and DevOps engineer will help build the analytics platform for Bazaarvoice data that will power our client-facing reporting, product performance reporting, and financial reporting. You will also help us operationalize our Hadoop clusters, Kafka and Storm services, high-volume event collectors, and build out improvements to our custom analytics job portal in support of Map/Reduce and Spark jobs. The Analytics Platform is used to aggregate data sets to build out known new product offerings related to analytics and media as well as a number of pilot initiatives based on this data. You will need to understand the business cases of the various products and build a common platform and set of services that help all of our products move fast and iterate quickly. You will help us pick and choose the right technologies for this platform.\n\nKey Responsibilities\n\nIn your first 90 days you can expect the following:\n\n\n* An overview of our Big Data platform code base and development model\n\n* A tour of the products and technologies leveraging the Big Data Analytics Platform\n\n* 4 days of Cloudera training to provide a quick ramp-up of the technologies involved\n\n* By the end of the 90 days, you will be able to complete basic enhancements to code supporting large-scale analytics using Map/Reduce as well as contribute to the operational maintenance of a high-volume event collection pipeline.\n\n\n\n\nWithin the first year you will:\n\n\n* Own design, implementation, and support of major components of platform development. This includes working with the various stakeholders for the platform team to understand their requirements and deliver high-leverage capabilities.\n\n* Have a complete grasp of the technology stack, and help guide where we go next.\n\n\n


See more jobs at Crossover

# How do you apply?\n\n This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.

Moz


closed

dev

 
This job post is closed and the position is probably filled. Please do not apply.
Full Time: Sr. Software Engineer - Big Data at Moz in Seattle, WA or Remote


See more jobs at Moz

# How do you apply?

This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.

Crossover


closed

senior

 

dev

This job post is closed and the position is probably filled. Please do not apply.
Are you a Senior Software Engineer who has spent several years working with Big Data technologies? Have you created streaming analytics algorithms to process terabytes of real-time data and deployed Hadoop or Cassandra clusters across dozens of VMs? Have you been part of an organization driven by a DevOps culture, where the engineer has end-to-end responsibility for the product, from development to operating it in production? Are you willing to join a team of elite engineers working on a fast-growing analytics business? Then this role is for you!

Job Description

The Software and DevOps Engineer will help build the analytics platform for Bazaarvoice data that will power our client-facing reporting, product performance reporting, and financial reporting. You will also help us operationalize our Hadoop clusters, Kafka and Storm services, and high-volume event collectors, and build out improvements to our custom analytics job portal in support of Map/Reduce and Spark jobs. The Analytics Platform is used to aggregate data sets to build out known new product offerings related to analytics and media, as well as a number of pilot initiatives based on this data. You will need to understand the business cases of the various products and build a common platform and set of services that help all of our products move fast and iterate quickly. You will help us pick and choose the right technologies for this platform.

Key Responsibilities

In your first 90 days you can expect the following:

* An overview of our Big Data platform code base and development model
* A tour of the products and technologies leveraging the Big Data Analytics Platform
* 4 days of Cloudera training to provide a quick ramp-up on the technologies involved
* By the end of the 90 days, you will be able to complete basic enhancements to code supporting large-scale analytics using Map/Reduce, as well as contribute to the operational maintenance of a high-volume event collection pipeline.

Within the first year you will:

* Own design, implementation, and support of major components of platform development. This includes working with the various stakeholders for the platform team to understand their requirements and deliver high-leverage capabilities.
* Have a complete grasp of the technology stack, and help guide where we go next.

Bazaarvoice is a network that connects brands and retailers to the authentic voices of people where they shop. Each month, more than 500 million people view and share authentic opinions, questions, and experiences about tens of millions of products in the Bazaarvoice network. Our technology platform amplifies these voices into the places that influence purchase decisions. Network analytics help marketers and advertisers provide more engaging experiences that drive brand awareness, consideration, sales, and loyalty. Headquartered in Austin, Texas, Bazaarvoice has offices in Chicago, London, Munich, New York, Paris, San Francisco, Singapore, and Sydney.

Total compensation is $30/hour.
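This posting highlights Kafka, Storm, and high-volume event collectors. As a hedged sketch only (none of these topic, bucket, or field names come from the posting), the hand-off from an event collector to batch storage could look roughly like this with the `kafka-python` client and `boto3`:

```python
# Hypothetical sketch of the "high volume event collector" hand-off: a consumer
# reads raw page-view events from Kafka and flushes them to S3 in batches for
# later Map/Reduce or Spark processing. Topic, servers, bucket, and batch size
# are assumptions, not details from the posting.
import json
import boto3
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "pageview-events",
    bootstrap_servers=["kafka-1:9092", "kafka-2:9092"],
    group_id="event-archiver",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)

s3 = boto3.client("s3")
batch, batch_id = [], 0

for message in consumer:
    batch.append(message.value)
    if len(batch) >= 10_000:
        # Flush the batch as one JSON-lines object; a real pipeline would also
        # handle offset commits, retries, and time-based partitioning.
        body = "\n".join(json.dumps(event) for event in batch)
        s3.put_object(
            Bucket="example-events",
            Key=f"raw/batch-{batch_id}.json",
            Body=body.encode("utf-8"),
        )
        batch, batch_id = [], batch_id + 1
```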


See more jobs at Crossover

# How do you apply?

This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.

MetricStory


closed

javascript

 

node js

 
This job post is closed and the position is probably filled. Please do not apply.
MetricStory is revolutionizing web analytics. Currently, it is painful to set up web analytics, create reports, and finally get insights out of those reports. Our goal is to make it easy for companies to capture and analyze customer data without having to code. To do this, we are storing and analyzing the full user clickstream. We are a recent Techstars-funded company and we have expert domain knowledge in analytics. You'll be our first engineer and have real ownership, responsibility, and impact on the business. The perfect candidate is a senior/lead-level engineer with a few years' experience building product who loves architecting complex systems. This position requires solving hard problems and is focused on writing scalable code to capture and analyze big data.

We are looking for an engineer who has experience with, and is passionate about, storing large volumes of data and retrieving that data in seconds. The ideal candidate will have experience storing large amounts of event data in a NoSQL database like DynamoDB and exporting/cleaning it with Amazon EMR (HiveQL) into Redshift for fast access. This position requires working knowledge of the best database structures for speed, large data sets, data cleaning, and how to transfer NoSQL data to SQL. If you are up for a serious technical challenge to help build this company from the ground up, then contact us!

Our stack is NodeJS, DynamoDB, MongoDB, D3.js, Angular, Redis, Amazon Redshift, and plain vanilla JavaScript.
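Their stack is NodeJS, but to illustrate the storage pattern the posting describes (clickstream events landing in DynamoDB before an EMR/Hive export into Redshift), here is a minimal sketch using Python and `boto3`; the table name, key schema, and fields are assumptions, not MetricStory's actual schema.

```python
# A minimal sketch of the capture side of a clickstream pipeline: each click is
# written to a DynamoDB table keyed by session, ready for a later EMR/Hive
# export into Redshift. Table and attribute names are hypothetical.
import time
import uuid
import boto3

dynamodb = boto3.resource("dynamodb")
clicks = dynamodb.Table("clickstream_events")

def record_click(session_id: str, page_url: str, element_id: str) -> None:
    """Persist one click event; the sort key keeps a session's events ordered."""
    clicks.put_item(
        Item={
            "session_id": session_id,              # partition key
            "event_ts": int(time.time() * 1000),   # sort key (ms since epoch)
            "event_id": str(uuid.uuid4()),
            "page_url": page_url,
            "element_id": element_id,
        }
    )

record_click("sess-42", "https://example.com/pricing", "signup-button")
```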


See more jobs at MetricStory

# How do you apply?

This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.

Showroom Logic


closed
This job post is closed and the position is probably filled. Please do not apply.
Showroom Logic is the 26th fastest-growing company in America. It powers paid-search, display, and retargeting campaigns for thousands of auto dealerships nationwide with its industry-leading AdLogic platform. Our dev team is an elite group of individuals who love creating solutions to complex technical problems. Our full-time devs enjoy benefits for them and their families, very competitive salaries, periodic trips to Miami or Southern California, the flexibility of telecommuting, an extremely high level of trust, and fun, skill-stretching projects. We are changing the way advertisers manage their digital marketing with our award-winning technology.

Position Summary:

We are looking for a Data Engineer to be responsible for retrieving, validating, analyzing, processing, cleansing, and managing external and internal data sources. This is not just a data warehousing position: a critical function of this job is to design and implement optimal ways to manage and analyze data. The Data Engineer is expected to learn existing processes, learn and apply 'Big Data' tools, and apply software development skills to automate processes, create tools, and modify existing processes for increased efficiency and scalability.

Key functions include:

* Developing tools for data processing and information retrieval (both batch processing and real-time querying)
* Supporting existing projects where evaluating and ensuring data quality is vital to the product development process
* Analyzing, processing, evaluating, and documenting very large data sets
* Providing RESTful APIs that other teams can use to store and retrieve data (see the sketch below)
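For the last key function, a minimal sketch of a store-and-retrieve REST service might look like the following; Flask and the in-memory dictionary are stand-ins chosen for illustration, not Showroom Logic's actual stack.

```python
# Hypothetical sketch of a small REST service other teams could call: POST a
# record, get back an id, and GET the record by that id. In-memory storage
# stands in for whatever database the real platform uses.
from uuid import uuid4
from flask import Flask, jsonify, request

app = Flask(__name__)
records = {}  # illustrative only; a real service would persist to a database

@app.route("/records", methods=["POST"])
def create_record():
    record_id = str(uuid4())
    records[record_id] = request.get_json(force=True)
    return jsonify({"id": record_id}), 201

@app.route("/records/<record_id>", methods=["GET"])
def get_record(record_id):
    if record_id not in records:
        return jsonify({"error": "not found"}), 404
    return jsonify(records[record_id])

if __name__ == "__main__":
    app.run(port=8080)
```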


See more jobs at Showroom Logic

# How do you apply?

This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.

Tapway


closed
Malaysia
 
💰 $30k - $45k

python

 

dev

This job post is closed and the position is probably filled. Please do not apply.
Job Description:

- Lead, design, develop, and implement large-scale, real-time data processing systems, working with large volumes of structured and unstructured data from various complex sources.
- Design, implement, and deploy ETL to load data into NoSQL / Hadoop (a small sketch follows below).
- Performance fine-tuning of the data processing platform
- Development of various APIs to interact with the front end and other data warehouses
- Coordinate with web programmers to deliver a stable and highly available reporting platform
- Coordinate with data scientists to integrate complex data models into the data processing platform
- Have fun in a highly dynamic team and drive innovations to continue as a leader in one of the fastest-growing industries

Job Requirements:

- Candidate must possess at least a Bachelor's Degree in Computer Science, Information Systems, or a related discipline. MSc or PhD a plus.
- Proficiency in Python
- A strong background in interactive query processing
- Experience with Big Data applications/solutions such as Hadoop, HBase, Hive, Cassandra, Pig, etc.
- Experience with NoSQL and handling large datasets
- Passion and interest in all things distributed: file systems, databases, and computational frameworks
- Passionate, resourceful, self-motivated, highly committed, a team player, and able to motivate others
- Strong leadership qualities
- Good verbal and written communication skills
- Must be willing to work in a highly dynamic and challenging startup environment


#Salary and compensation
$30,000 — $45,000/year


#Equity
1.0 - 3.0


#Location
Malaysia
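As a rough sketch of the ETL bullet above, assuming PySpark on the Hadoop cluster and made-up paths, columns, and table names, loading cleaned data into a Hive table could look like this:

```python
# Hypothetical ETL sketch: pull raw device records from HDFS, normalise a few
# fields, and load the result into a partitioned Hive table on the Hadoop
# cluster. Paths, column names, and the table name are assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("sensor-etl")
    .enableHiveSupport()
    .getOrCreate()
)

raw = spark.read.json("hdfs:///landing/sensor_events/")

cleaned = (
    raw
    .dropDuplicates(["event_id"])
    .withColumn("event_time", F.to_timestamp("event_time"))
    .filter(F.col("device_id").isNotNull())
)

# Append into a Hive table for downstream reporting and APIs.
cleaned.write.mode("append").saveAsTable("analytics.sensor_events")

spark.stop()
```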


See more jobs at Tapway

Visit Tapway's website

# How do you apply?

This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.

Swimlane


closed
This job post is closed and the position is probably filled. Please do not apply.
Swimlane is looking for a NoSQL engineer with C# experience to join our team. Our product enables Federal and Fortune 100 companies to do business intelligence on big data and implement workflow procedure tasks around that data. We are looking for a software engineer to help build the next-generation security management application. This is a new product, not a legacy one, and the technology stack is the latest and greatest; you will learn and use groundbreaking technologies! You will have the ability to work from home, work on open-source projects, and have the opportunity to write articles on isolated components of your work.


See more jobs at Swimlane

# How do you apply?

This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.

ScalingData


closed
This job post is closed and the position is probably filled. Please do not apply.
ScalingData's build infrastructure engineering team builds and maintains our internal build, test, continuous integration, packaging, release, and software delivery systems and infrastructure. Engineers who are interested in DevOps, configuration management, build systems, and distributed systems will feel at home thinking about developer efficiency and productivity, simplifying multi-language builds, automated testing of complex distributed systems, and how customers want to consume and deploy complex distributed systems in modern data centers. The build infrastructure team is a critical part of the larger engineering team. Distributed systems are hard, but building the infrastructure to develop them is harder.

By building on big data technologies such as Hadoop, help us create the essential solution for identifying and solving critical performance and compliance issues in data centers.

Some of the technology we use:

* Java, Go, C/C++
* Hadoop, Solr, Kafka, Impala, Hive, Spark
* AWS, Maven, Jenkins, GitHub, JIRA


See more jobs at ScalingData

# How do you apply?

This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.

CLARITY SOLUTION GROUP


closed

scala

 

api

 
This job post is closed and the position is probably filled. Please do not apply.
Note: For this position you will be required to travel throughout the country on a weekly basis.

DOES WORKING FOR AN ORGANIZATION WITH THESE BENEFITS APPEAL TO YOU?

* Working on complex transformation programs across many clients and many industries
* Unlimited paid time off
* Competitive compensation, which includes uncapped bonus potential based on individual contributions
* Mentor program
* Career development
* Tremendous growth opportunities (the company is growing at a rate of 35% or more annually)
* Strong work/life balance
* A smaller, nimble organization that is easy to work with
* Visibility to the leadership team on a daily basis
* Be a part of an elite Data & Analytics team

IF YOU ANSWERED YES TO THE ABOVE ITEMS, KEEP READING!

We are looking for individuals with the ability to drive the architectural decision-making process, who are experienced with leading teams of developers, but who are also capable of and enthusiastic about implementing every aspect of an architecture themselves.

OUR DATA ENGINEERS:

* Are hands-on, self-directed engineers who enjoy working in collaborative teams
* Are data transformation engineers who:
  * Design and develop highly scalable, end-to-end processes to consume, integrate, and analyze large volumes of complex data from sources such as Hive, Flume, and other APIs
  * Integrate datasets and flows using a variety of open-source and best-in-class proprietary software
* Work with business stakeholders and data SMEs to elicit requirements and develop real-time business metrics, analytical products, and analytical insights
* Profile and analyze complex and large datasets
* Collaborate and validate implementation with other technical team members


See more jobs at CLARITY SOLUTION GROUP

# How do you apply?

This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.

The Shelf


closed
New York City
 
💰 $70k - $120k
This job post is closed and the position is probably filled. Please do not apply.
We're looking for a data nerd with an engineering bent who enjoys incorporating messy and varied data sets into a clean and efficient data analysis pipeline. Wrangling data to find meaningful insights is what drives you to work every day. You are relentless in getting things done. You don't need parental supervision (i.e., you don't like to be micro-managed). You want to take ownership of the code/features you're building.

Must have:
- Deep understanding of CS fundamentals as well as distributed systems
- At least 5 years of experience building production-level software (Python and Django required)
- At least 2 years in a big-data-related role at a data-driven company
- Continuous integration and deployment experience

Experience: you should have experience fetching, processing, and analyzing data in Python:
- Experience developing and maintaining the back end of a data-driven web app
- Extensive experience with web scraping (deep knowledge of Selenium a plus)
- Experience implementing a data collection and analysis pipeline, scaling up to larger data sets and optimizing as necessary
- Experience working with relational and non-relational databases, particularly MongoDB
- (Not necessary, but we'd love you if) Experience with general data mining (NLTK) and machine learning techniques
- (Not necessary, but we'd love you if) Understanding of and experience maintaining & optimizing a PostgreSQL database is a major plus

Our Stack:
Python + Django
Amazon EC2, RDS (Postgres), Rackspace, RabbitMQ for messaging, Celery for queues


#Salary and compensation
$70,000 — $120,000/year


#Equity
0.5 - 3.0


#Location
New York City
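Given the stack listed above (Python, Celery on RabbitMQ, MongoDB) plus the web-scraping requirement, a minimal sketch of one pipeline step might look like the following; the broker URL, database name, and selectors are illustrative, and `requests`/BeautifulSoup stand in where the real pipeline might use Selenium.

```python
# Hypothetical sketch of one collection step: a Celery task fetches a page,
# extracts a few fields, and upserts the result into MongoDB. All names here
# are made up for illustration.
import requests
from bs4 import BeautifulSoup
from celery import Celery
from pymongo import MongoClient

app = Celery("scraper", broker="amqp://guest@localhost//")  # RabbitMQ, per the stack above
mongo = MongoClient("mongodb://localhost:27017")
pages = mongo.shelf_demo.pages

@app.task(bind=True, max_retries=3)
def scrape_page(self, url: str) -> None:
    """Fetch one URL, pull out the title and headline text, and persist it."""
    try:
        response = requests.get(url, timeout=10)
        response.raise_for_status()
    except requests.RequestException as exc:
        # Retry transient failures with a short back-off.
        raise self.retry(exc=exc, countdown=30)

    soup = BeautifulSoup(response.text, "html.parser")
    pages.update_one(
        {"url": url},
        {"$set": {
            "title": soup.title.string if soup.title else None,
            "h1": [h.get_text(strip=True) for h in soup.find_all("h1")],
        }},
        upsert=True,
    )

# Enqueue from the web app or a scheduler:
# scrape_page.delay("https://example.com/article")
```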


See more jobs at The Shelf

Visit The Shelf's website

# How do you apply?

This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.