Remote Big Data Jobs Open Startup

find a remote job
work from anywhere

Browse 14+ remote Big Data jobs in April 2021 at companies like Shopify, Very Large and Empirico, hiring for roles such as Senior Data Engineer, Senior Data Scientist or Staff Data Scientist.


This week's remote Big Data jobs

Shopify


verified
Canada, United States

Staff Data Scientist


Tags: data scientist, python, big data, object oriented programming
**Company Description**

Data is a crucial part of Shopify’s mission to make commerce better for everyone. We organize and interpret petabytes of data to provide solutions for our merchants and stakeholders across the organization. From pipelines and schema design to machine learning products and decision support, data science at Shopify is a diverse role with many opportunities to positively impact our success.

Our Data Scientists focus on pushing products and the business forward, concentrating on solving important problems rather than on specific tools. We are looking for talented data scientists to help us better understand our merchants and buyers so we can help them on their journey.

**Job Description**

Do you get excited about all things data? Are you looking for a role where you can see the tangible results of your work? If you're excited by solving hard, impactful problems and you have a passion for logistics, then our Staff Data Scientist role may be right for you.

**Qualifications**

* 7-10 years of commercial experience in a data science role solving high-impact business problems
* Well-built technical experience that inspires creativity and innovation in individual contributors
* Experience with product leadership and technical decision-making
* Experience working closely with business stakeholders at every level, from the C-suite down
* Able to jump into the code at a deep level, but also to contribute to long-term initiatives by mentoring your team
* Multiple work streams excite you; you treat ambiguity as an opportunity for high-level thinking
* Experience creating data product strategies, building data products, iterating after launch, and trying again
* Extensive experience using Python, including a strong grasp of object-oriented programming (OOP) fundamentals

**What would be great if you have:**

* Previous experience using Spark
* Experience with statistical methods like regression, GLMs, or experiment design and analysis
* Exposure to Tableau, QlikView, Mode, Matplotlib, or similar data visualization tools

**Additional information**

If you’re interested in helping us shape the future of commerce at Shopify, click the “Apply Now” button to submit your application. Please submit a resume and cover letter with your application. Make sure to tell us how you think you can make an impact at Shopify, and what drew you to the role.

#Location
Canada, United States
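The listing asks for Python OOP fundamentals and familiarity with regression. As a rough, hedged illustration of both (all class, function, and variable names here are invented for this sketch, not part of the posting), a tiny transform hierarchy plus a hand-rolled one-variable least-squares fit:

```python
from statistics import fmean

class Transform:
    """Base class: subclasses implement apply() on a list of floats."""
    def apply(self, values):
        raise NotImplementedError

class Center(Transform):
    """Subtract the mean from every value (a classic OOP subclassing demo)."""
    def apply(self, values):
        m = fmean(values)
        return [v - m for v in values]

def ols_slope(x, y):
    """Slope of the ordinary-least-squares line y ~ a + b*x."""
    mx, my = fmean(x), fmean(y)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    return sxy / sxx

centered = Center().apply([1.0, 2.0, 3.0])   # [-1.0, 0.0, 1.0]
slope = ols_slope([1, 2, 3], [2, 4, 6])      # 2.0 for a perfectly linear y
```

In practice a candidate would reach for scikit-learn or statsmodels rather than hand-rolled OLS; the point is only the OOP shape and the regression idea.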


See more jobs at Shopify

# How do you apply?

You can apply for the role here: https://smrtr.io/5mMwv
Apply for this position

Shopify


verified
United States, Canada

Senior Data Scientist


Tags: remote data science role, senior data scientist, data science, azure
**Company Description**

Shopify is now permanently remote and working towards a future that is digital by default. Learn more about what this can mean for you.

At Shopify, we build products that help entrepreneurs around the world start and grow their businesses. We’re the world’s fastest-growing commerce platform, with over 1 million merchants in more than 175 countries, and solutions spanning point-of-sale and online commerce to financial services, shipping logistics, and marketing.

**Job Description**

Data is a crucial part of Shopify’s mission to make commerce better for everyone. We organize and interpret petabytes of data to provide solutions for our merchants and stakeholders across the organization. From pipelines and schema design to machine learning products and decision support, data science at Shopify is a diverse role with many opportunities to positively impact our success.

Our Data Scientists focus on pushing products and the business forward, concentrating on solving important problems rather than on specific tools. We are looking for talented data scientists to help us better understand our merchants and buyers so we can help them on their journey.

**Responsibilities:**

* Proactively identify and champion projects that solve complex problems across multiple domains
* Partner closely with product, engineering, and other business leaders to influence product and program decisions with data
* Apply specialized skills and fundamental data science methods (e.g. regression, survival analysis, segmentation, experimentation, and machine learning when needed) to inform improvements to our business
* Design and implement end-to-end data pipelines: work closely with stakeholders to build instrumentation and define dimensional models, tables, or schemas that support business processes
* Build actionable KPIs, production-quality dashboards, informative deep dives, and scalable data products
* Influence leadership to drive more data-informed decisions
* Define and advance best practices within data science and product teams

**Qualifications**

* 4-6 years of commercial experience as a Data Scientist solving high-impact business problems
* Extensive experience with Python and software engineering fundamentals
* Experience with applied statistics and quantitative modelling (e.g. regression, survival analysis, segmentation, experimentation, and machine learning when needed)
* Demonstrated ability to translate analytical insights into clear recommendations and to communicate them effectively to technical and non-technical stakeholders
* Curiosity about the problem domain and an analytical approach
* Strong sense of ownership and a growth mindset

**Experience with one or more of:**

* Deep understanding of advanced SQL techniques
* Expertise in statistical techniques and their applications in business
* Masterful data storytelling and strategic thinking
* Deep understanding of dimensional modelling and scaling ETL pipelines
* Experience launching productionized machine learning models at scale
* Extensive domain experience in e-commerce, marketing, or SaaS

**Additional information**

At Shopify, we are committed to building and fostering an environment where our employees feel included, valued, and heard. We believe that a strong commitment to diversity and inclusion enables us to truly make commerce better for everyone. We strongly encourage applications from Indigenous people, racialized people, people with disabilities, people from gender and sexually diverse communities, and/or people with intersectional identities. Please take a look at our 2019 Sustainability Report to learn more about Shopify's commitments.

#Location
United States, Canada
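Among the methods the listing names, experimentation is the most self-contained to sketch. A minimal, hedged example of analyzing an A/B experiment as a two-proportion z-test, using only the standard library; the counts and function names are invented for illustration, not drawn from the posting:

```python
from math import sqrt, erf

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z statistic for H0: the two conversion rates are equal."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p = (conv_a + conv_b) / (n_a + n_b)            # pooled rate under H0
    se = sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))   # pooled standard error
    return (p_b - p_a) / se

def p_value(z):
    """Two-sided p-value from the standard normal CDF."""
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Hypothetical experiment: 120/1000 conversions in control, 150/1000 in treatment.
z = two_proportion_z(120, 1000, 150, 1000)
```

Real analyses at this scale would also account for multiple testing and sequential peeking; this shows only the core calculation.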


See more jobs at Shopify

# How do you apply?

Click here to apply: https://smrtr.io/5njyK
Apply for this position

Shopify


verified
Canada, United States

Senior Data Engineer


Tags: senior data engineer, data engineering, data platform engineering, spark
**Company Description**

Shopify is the leading omni-channel commerce platform. Merchants use Shopify to design, set up, and manage their stores across multiple sales channels, including mobile, web, social media, marketplaces, brick-and-mortar locations, and pop-up shops. The platform also provides merchants with a powerful back office and a single view of their business, from payments to shipping. The Shopify platform was engineered for reliability and scale, making enterprise-level technology available to businesses of all sizes.

**Job Description**

Our Data Platform Engineering group builds and maintains the platform that delivers accessible data to power decision-making at Shopify for over a million merchants. We’re hiring high-impact developers across teams:

* The Engine group organizes all merchant and Shopify data into our data lake in highly optimized formats for fast query processing, and maintains the security and quality of our datasets.
* The Analytics group builds products on the Engine primitives to deliver simple, useful tools that power scalable transformation of data at Shopify in batch, streaming, or machine learning workloads. This group is focused on making it really simple for our users to answer three questions: What happened in the past? What is happening now? And what will happen in the future?
* The Data Experiences group builds end-user experiences for experimentation, data discovery, and business intelligence reporting.
* The Reliability group operates the data platform efficiently, consistently, and reliably. They build tools for the other Data Platform teams to encourage consistency, and they champion reliability across the platform.

**Qualifications**

While our teams value specialized skills, they also have a lot in common. We're looking for a:

* High-energy self-starter with experience in, and passion for, data and big-data-scale processing, who enjoys working in fast-paced environments and loves making an impact
* Exceptional communicator with the ability to translate technical concepts into easy-to-understand language for our stakeholders
* Teammate excited about remote work, who values collaborating on problems, asking questions, delivering feedback, and supporting others in their goals, whether they are in your vicinity or entire cities apart
* Solid software engineer experienced in building and maintaining systems at scale

**A Senior Data Developer at Shopify typically has 4-6 years of experience in one or more of the following areas:**

* Working with the internals of a distributed compute engine (Spark, Presto, DBT, or Flink/Beam)
* Query optimization, resource allocation and management, and data lake performance (Presto, SQL)
* Cloud infrastructure (Google Cloud, Kubernetes, Terraform)
* Security products and methods (Apache Ranger, Apache Knox, OAuth, IAM, Kerberos)
* Deploying and scaling ML solutions using open-source frameworks (MLflow, TFX, H2O, etc.)
* Building full-stack applications (Ruby/Rails, React, TypeScript)
* Background and practical experience in statistics and/or computational mathematics (Bayesian and frequentist approaches, NumPy, PyMC3, etc.)
* Modern big-data storage technologies (Iceberg, Hudi, Delta)

**Additional information**

At Shopify, we are committed to building and fostering an environment where our employees feel included, valued, and heard. We believe that a strong commitment to diversity and inclusion enables us to truly make commerce better for everyone. We strongly encourage applications from Indigenous people, racialized people, people with disabilities, people from gender and sexually diverse communities, and/or people with intersectional identities.

#Location
Canada, United States
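The role centres on batch transformation of data at scale with engines like Spark. With no cluster at hand, a toy stand-in can still show the map/shuffle/reduce shape of a grouped aggregation (the equivalent of a `GROUP BY` or `reduceByKey`); the record layout and field names below are invented for this sketch:

```python
from collections import defaultdict

# Invented sample records standing in for rows from a data lake.
records = [
    {"shop": "a", "gmv": 10.0},
    {"shop": "b", "gmv": 5.0},
    {"shop": "a", "gmv": 2.5},
]

def aggregate(rows, key, value):
    """Group rows by `key` and sum `value` -- the core of a batch rollup."""
    totals = defaultdict(float)
    for row in rows:
        totals[row[key]] += row[value]
    return dict(totals)

totals = aggregate(records, "shop", "gmv")   # {'a': 12.5, 'b': 5.0}
```

In Spark the same logic would be a one-line `df.groupBy("shop").sum("gmv")`; the engine's job is distributing exactly this computation across partitions.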


See more jobs at Shopify

# How do you apply?

Click here to apply: https://smrtr.io/5kRRR
Apply for this position

Previous remote Big Data jobs

Clevertech

This job post is closed and the position is probably filled. Please do not apply.

**Working at Clevertech**

People do their best work when they’re cared for and in the right environment:

* RemoteNative™: Pioneers in the industry, we are committed to remote work.
* Flexibility: Wherever you are, and wherever you want to go. We embrace the freedom gained through trust and professionalism.
* Team: Be part of an amazing team of senior engineers that you can rely on.
* Growth: Become a master in the art of remote work and effective communication.
* Compensation: Best-in-class compensation for remote workers, plus the swag you want.
* Cutting Edge: Stay sharp in your space; work at the very edge of tech.
* Passion: An annual financial allowance for YOUR development and YOUR passions.

**The Job**

* 7+ years of professional experience (a technical assessment will be required)
* Senior-level experience with data warehousing and data engineering
* Experience evaluating and selecting a cloud data warehouse: Redshift, Snowflake, or Synapse
* Accomplishments in data modeling, schemas, and ETL development to support business processes, ideally in the construction industry
* Clear communicator with expertise in management-level presentation and documentation
* English fluency, verbal and written
* Professional, empathic, team player
* Problem solver, proactive, go-getter

**Life at Clevertech**

We’re Clevertech. Since 2000, we have been building technology through empowered individuals. As a team, we challenge in order to be of service, to deliver growth, and to drive business for our clients.

Our team is made up of people who are not only from different countries but also from diverse backgrounds and disciplines: a coordinated team of individuals who care, take on responsibility, and drive change.

https://youtu.be/1OKhKatReyg

**Getting Hired**

Interested in exploring your future in this role and Clevertech? Set yourself up for success and take a look at our [Interview Process](https://www.clevertech.biz/thoughts/interviewing-with-clevertech) before getting started!

#Location
US, Canada, Europe
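The listing asks for data modeling, schemas, and ETL development. A hedged sketch of the dimensional-modelling idea using in-memory SQLite: one fact table joined to one dimension table, then aggregated. Table and column names (`dim_project`, `fact_cost`, etc.) are invented for this sketch, not taken from the posting:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    -- A tiny star schema: one dimension, one fact table.
    CREATE TABLE dim_project (project_id INTEGER PRIMARY KEY, region TEXT);
    CREATE TABLE fact_cost (project_id INTEGER, amount REAL);
""")
con.executemany("INSERT INTO dim_project VALUES (?, ?)",
                [(1, "east"), (2, "west")])
con.executemany("INSERT INTO fact_cost VALUES (?, ?)",
                [(1, 100.0), (1, 50.0), (2, 25.0)])

# The canonical warehouse query: join fact to dimension, group by attribute.
rows = con.execute("""
    SELECT d.region, SUM(f.amount)
    FROM fact_cost f JOIN dim_project d USING (project_id)
    GROUP BY d.region ORDER BY d.region
""").fetchall()   # [('east', 150.0), ('west', 25.0)]
```

On Redshift, Snowflake, or Synapse the schema design question is the same; only the loading and distribution mechanics change.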


See more jobs at Clevertech

# How do you apply?

This job post has been closed by the poster, which means they probably have enough applicants now. Please do not apply.
Clevertech

This job post is closed and the position is probably filled. Please do not apply.

**Working at Clevertech**

People do their best work when they’re cared for and in the right environment:

* RemoteNative™: Pioneers in the industry, we are committed to remote work.
* Flexibility: Wherever you are, and wherever you want to go. We embrace the freedom gained through trust and professionalism.
* Team: Be part of an amazing team of senior engineers that you can rely on.
* Growth: Become a master in the art of remote work and effective communication.
* Compensation: Best-in-class compensation for remote workers, plus the swag you want.
* Cutting Edge: Stay sharp in your space; work at the very edge of tech.
* Passion: An annual financial allowance for YOUR development and YOUR passions.

**The Job**

* 7+ years of professional experience (a technical assessment will be required)
* Senior-level experience developing and operating large-scale data pipelines with Apache Spark
* Experience with Spark, Beam, Druid, Ignite
* Experience with reporting, large datasets, and complex queries
* English fluency, verbal and written
* Professional, empathic, team player
* Problem solver, proactive, go-getter

**Life at Clevertech**

We’re Clevertech. Since 2000, we have been building technology through empowered individuals. As a team, we challenge in order to be of service, to deliver growth, and to drive business for our clients.

Our team is made up of people who are not only from different countries but also from diverse backgrounds and disciplines: a coordinated team of individuals who care, take on responsibility, and drive change.

https://youtu.be/1OKhKatReyg

**Getting Hired**

Interested in exploring your future in this role and Clevertech? Set yourself up for success and take a look at our [Interview Process](https://www.clevertech.biz/thoughts/interviewing-with-clevertech) before getting started!

#Location
US, Canada, Europe


See more jobs at Clevertech

# How do you apply?

This job post has been closed by the poster, which means they probably have enough applicants now. Please do not apply.

Andalus


closed
🌏 Worldwide

Big Data Systems Engineer


Tags: kubernetes, spark, infrastructure, big data
This job post is closed and the position is probably filled. Please do not apply.
Andalus is a start-up aiming to utilize the latest technologies to help clients solve their data needs. We are hackers by nature and like to think of innovative solutions to issues that have long been considered status quo. We aim to empower our clients with state-of-the-art infrastructure components, allowing them to become data-enabled organizations.

You will work first-hand on developing the first major release of our Andalus platform, targeted at empowering important organizations in the MENA region.

**Responsibilities:**

* Design, build, and maintain the core data computation infrastructure used by Andalus
* Debug issues across services and levels of the stack
* Recommend the right hardware specs and components to run well with the developed software and adhere to clients’ requirements
* Develop systems that proactively capture the health status of various software and hardware components
* Build a great customer experience for people using your infrastructure

**Qualifications:**

* Familiarity with the latest computation and storage architectures and components
* Experience managing distributed systems within computing clusters
* Experience in configuration and maintenance of applications such as web servers, load balancers, relational databases, storage systems, and messaging systems
* Knowledge of system design concepts
* Ability to debug complex problems across the whole stack
* Demonstrated understanding of container networking and security
* Comfort working with network protocols, proxies, and load balancers
* Experience building highly available services
* Experience with Kubernetes or other container orchestration systems
* Technical writing skills
* Interest in or experience with systems languages, such as Go
* Strong communication skills and willingness to engage with your teammates in group problem-solving in a remote-work environment

**Preferred Qualifications:**

* Knowledge of data governance tooling and principles
* Working experience in organizations with large databases
* Experience dealing with data integrity and cleaning issues
* 2+ years of experience as a tech lead

#Location
🌏 Worldwide
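One responsibility above is building systems that proactively capture component health. A minimal, hedged sketch of the pattern, with all probe and component names invented: each probe is a callable returning pass/fail, and a report rolls them up into an overall status (in a Kubernetes setting this is the logic behind a liveness or readiness endpoint):

```python
def probe_disk(free_bytes, min_bytes=1 << 30):
    """Invented example probe: healthy if at least 1 GiB is free."""
    return free_bytes >= min_bytes

def health_report(probes):
    """Run named probe callables; overall status is ok iff all pass."""
    results = {name: bool(fn()) for name, fn in probes.items()}
    results["overall"] = all(results.values())
    return results

report = health_report({
    "disk": lambda: probe_disk(free_bytes=2 << 30),
    "queue": lambda: True,   # stand-in for a real message-queue check
})
```

A production version would expose this over HTTP and add timeouts per probe, so one hung check cannot block the whole report.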


See more jobs at Andalus

# How do you apply?

This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.

very large


closed

Java Python Golang Backend Engineer Big Data Data Greenfield Partners


Tags: big data, python, java, golang
This job post is closed and the position is probably filled. Please do not apply.
**Needed for this role**

* Java, Python, Golang: intermediate-plus to expert (preferred) in one or more
* NoSQL DB (Cassandra, etc., or experience with unstructured time-series databases)
* Big data, and data at very large scale
* Experienced, battle-hardened software engineer (large distributed systems, large scale)

This is NOT an SRE role!

This is a software engineering role on a team that provides ALL monitoring and will be responsible for developing a custom stack for data integration and retrieval. The team monitors time-series data ingest upwards of 1.5M+ records a minute.

**Must have**

* The ability to develop code to access resident data, then digest and correlate it
* Battle-hardened software engineering experience deploying and implementing distributed systems at large scale
* Solid programming skills: knows one or more of Java, Python, and Golang, and is expert in at least one

They are NOT looking for a script writer. The ideal candidate has experience with a time-series data store (e.g. Cassandra):

* Expertise in NoSQL databases at giga scale

The SRE Monitoring Infrastructure team (note: this is NOT an SRE role) is looking for a backend software engineer with experience working with large-scale systems and an operational mindset to help scale our operational metrics platform. This is a fantastic opportunity to enable all engineers to monitor and keep our site up and running. In return, you will get to work with a world-class team supporting a platform that serves billions of metrics at millions of QPS.

The engineers fill the mission-critical role of ensuring that our complex, web-scale systems are healthy, monitored, automated, and designed to scale. You will use your background as an operations generalist to work closely with our development teams from the early stages of design all the way through identifying and resolving production issues. The ideal candidate will be passionate about an operations role that involves deep knowledge of both the application and the product, and will believe that automation is a key component of operating large-scale systems.

**Responsibilities:**

* Serve as a primary point responsible for the overall health, performance, and capacity of one or more of our Internet-facing services
* Gain deep knowledge of our complex applications
* Assist in the roll-out and deployment of new product features and installations to facilitate our rapid iteration and constant growth
* Develop tools to improve our ability to rapidly deploy and effectively monitor custom applications in a large-scale UNIX environment
* Work closely with development teams to ensure that platforms are designed with "operability" in mind
* Function well in a fast-paced, rapidly changing environment
* Participate in a 24x7 rotation for second-tier escalations

**Basic Qualifications:**

* B.S. or higher in Computer Science or another technical discipline, or related practical experience
* UNIX/Linux systems administration background
* Programming skills (Golang, Python)

**Preferred Qualifications:**

* 5+ years in a UNIX-based large-scale web operations role
* Golang and/or Python experience
* Previous experience working with geographically distributed coworkers
* Strong interpersonal communication skills (including listening, speaking, and writing) and the ability to work well in a diverse, team-focused environment with other SREs, engineers, product managers, etc.
* Basic knowledge of most of these: data structures, relational and non-relational databases, networking, Linux internals, filesystems, web architecture, and related topics

**Team**

* Interact with 4-5 people (stand-ups), but not true scrum
* No interaction with outside teams

**Candidate workflow**

* 2 rounds
* 1 technical coding
* 1 team fit
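At 1.5M+ time-series records a minute, a common first step is pre-aggregating raw points into fixed time windows before storage. A hedged, toy sketch of that rollup; the record shape `(epoch_seconds, metric, value)` and the function name are invented for illustration:

```python
from collections import defaultdict

def rollup(points, window=60):
    """Average each metric over fixed `window`-second buckets.

    Bucketing by `ts - ts % window` maps every timestamp to the start
    of its window, so points in the same minute aggregate together.
    """
    sums = defaultdict(lambda: [0.0, 0])
    for ts, metric, value in points:
        bucket = ts - ts % window
        acc = sums[(metric, bucket)]
        acc[0] += value
        acc[1] += 1
    return {k: s / n for k, (s, n) in sums.items()}

avg = rollup([(0, "qps", 10.0), (30, "qps", 20.0), (60, "qps", 5.0)])
# {('qps', 0): 15.0, ('qps', 60): 5.0}
```

A store like Cassandra would then be keyed on `(metric, bucket)`, which keeps writes append-only and reads range-scannable.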


See more jobs at very large

# How do you apply?

This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.

Empirico

 

closed

Data Infrastructure Engineer Big Data Functional Programming Drug Discovery  


Tags: big data, engineer
This job post is closed and the position is probably filled. Please do not apply.
Empirico, an early-stage biotechnology company, is looking for a talented software engineer motivated by the opportunity to build scalable data systems that power the discovery of new medicines. You will work closely with a team of engineers and computational scientists to build and extend Empirico’s data infrastructure, which includes modern cloud-based systems and services that operate on some of the largest biological datasets in the world.

**Responsibilities:**

Your responsibilities will focus on designing and implementing robust and extensible data systems. You will be expected to:

* Design and implement scalable data infrastructure and pipelines
* Implement scalable algorithms in a distributed-systems setting
* Collaborate closely with an interdisciplinary team of scientists and engineers to address system pain points
* Improve developer efficiency and system quality through an emphasis on elegant code
* Advocate for improvements to systems and engineering practice

**Requirements:**

* 2+ years of professional experience designing and developing software on modern distributed data systems
* Experience processing and analyzing large and heterogeneous datasets
* A strong technical skill set that spans a broad range of technologies, programming languages, and paradigms
* Passion for systems thinking and a drive towards elegant, automated solutions to data problems
* Experience with Spark and Scala or another functional programming language is a plus
* Applicants must have authorization to work in the United States


See more jobs at Empirico

# How do you apply?

This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.

Kubevisor


closed

Senior Big Data Engineer


Tags: big data, senior, engineer
This job post is closed and the position is probably filled. Please do not apply.
**Description**

A successful candidate enjoys diving deep into data, doing analysis, discovering root causes, and designing long-term solutions, and wants to innovate in the world of cloud big data engineering and machine learning. Major responsibilities include:

* Understand the client's business need and guide them to a solution using Apache Spark, Hadoop, Kubernetes, AWS, etc.
* Lead customer projects by delivering a Spark project on AWS from beginning to end: understanding the business need, aggregating data, exploring data, and deploying to AWS (EMR, S3, Step Functions, etc.) to deliver business impact to the organization.

**Basic Qualifications**

* Degree in computer science or a similar field
* 5+ years of work experience in big data engineering
* Experience managing and processing data in a data lake
* Able to use a major programming language, preferably Scala or Python (preference in that order), on Spark to process data at massive scale
* Experience working with a wide range of big data tools, especially Spark on AWS

**Preferred Qualifications**

* Experience processing large datasets with Spark on AWS
* Experience in data modelling, ETL development, and data warehousing
* Experience building and operating highly available, distributed systems for data extraction, ingestion, and processing of large data sets
* Experience developing software projects
* Experience using Linux to process large data sets
* A combination of deep technical skills and enough business savvy to interface with all levels and disciplines within our client's organization
* A demonstrable track record of dealing well with ambiguity, prioritizing needs, and delivering results in a dynamic environment

Ideally you are in the GMT to GMT+4 timezone.
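The listing is built around managing data in a data lake on AWS. One concrete, widely used convention there is Hive-style partitioning of object keys, which lets engines like Spark prune entire prefixes when a query filters on date. A hedged sketch; the table, partition, and function names are invented for illustration:

```python
from datetime import date

def partition_key(table, day, shard):
    """Hive-style partition prefix, e.g. under an S3 bucket.

    Engines that understand `dt=...` in the path can skip every prefix
    outside the queried date range (partition pruning).
    """
    return f"{table}/dt={day.isoformat()}/shard={shard}/"

key = partition_key("events", date(2021, 4, 1), 3)
# 'events/dt=2021-04-01/shard=3/'
```

Formats like Iceberg and Delta push this further by tracking partitions in metadata instead of relying on path layout alone.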


See more jobs at Kubevisor

# How do you apply?

This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.

Mission Focus


closed

Big Data Developers


Tags: big data
This job post is closed and the position is probably filled. Please do not apply.
Mission Focus works closely with the Department of Defense Intelligence Community to create big data solutions. If you want to be on the leading edge of technology, and you love to code, Mission Focus is where you want to be.

**We:**

* Are a small but mighty team of professional software engineers
* Employ an agile, test-driven development methodology
* Avoid crunch time by managing our code and ourselves well
* Are building something magnificent

**You:**

* Focus and get things done
* Are eager to learn and happy to teach
* Dream in code, think in abstractions, draw in UML
* Aspire to write code that will open a door to new science

**Responsibilities:**

* Think deeply while crafting clean code that works
* Distill dense technical information into operational knowledge
* Take your turn as build master, doc master, cluster master, data master, test master
* Become a somebody in the world of renowned software engineers

**Requirements:**

* U.S. citizen with an existing security clearance, or clearable
* Work on-site with the team in Old Town Alexandria
* Solid curly-bracket software development experience

**Desirable Experience:**

* JavaScript, Ember, Sass
* Clojure, Compojure, Lisp
* Git, Jenkins, Maven, Leiningen, Node
* DevOps, *-IX, Mac OS
* Perl, shell scripting
* NoSQL compute and storage technologies
* Information visualization, UI/UX design
* Natural language / image / motion imagery processing
* Semiotics, knowledge representation, data modeling, semantic technologies
* Advanced physics, mathematics, statistical inference, artificial intelligence

Aspiring professional software engineers only, please. Project managers, start-up entrepreneurs, and business development types need not apply.

**About Mission Focus**

Mission Focus is an agile development shop that takes domain design and development as seriously as system design and development. We work mostly in the intelligence arena with DoD and IC customers, in close partnership with the Institute for Modern Intelligence. Our core domain is the storage, processing, and utilization of data in the context of immense scale and diversity. We are experts in cloud compute and storage technology and invented the Sign Representation Framework, which underpins a game-changing approach to data unification. We pride ourselves on our disciplined engineering practices and distinguish ourselves by our ability to continually learn, innovate, and deliver. The work we do is meaningful, intentional, and wrapped with our integrity. We are driven to think harder and work better than the rest because we believe the code we are writing will change the world.


See more jobs at Mission Focus

# How do you apply?

This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.

BairesDev


closed

Big Data Engineer


BairesDev


big data

engineer
This job post is closed and the position is probably filled. Please do not apply.
At BairesDev (Glassdoor Employee Score: 4.3), we are proud to be one of the fastest-growing companies in the industry because we don't sacrifice quality. With more than 1,300 collaborators, and providing talent to companies such as Google, Pinterest, and Udemy, we continue to rapidly add talent to our multicultural team, who will help us get to the next level.

Big Data Engineers will face numerous business-impacting challenges, so they must be ready to use state-of-the-art technologies and be familiar with different IT domains such as Machine Learning, Data Analysis, Mobile, Web, IoT, etc. They are passionate, active members of our community who enjoy sharing knowledge, challenging and being challenged by others, and are truly committed to improving themselves and those around them.

Main Activities:

- Work alongside Developers, Tech Leads, and Architects to build solutions that transform users' experience.
- Impact the core of the business by improving existing architectures or creating new ones.
- Create scalable, high-availability solutions and contribute to the key differentiator of each client.

What We Are Looking For:

- 6+ years of experience working as a Developer (Ruby, Python, Java, JS preferred).
- 5+ years of experience in Big Data (comfortable with enterprise Big Data topics such as Governance, Metadata Management, Data Lineage, Impact Analysis, and Policy Enforcement).
- Proficiency in analysis, troubleshooting, and problem-solving.
- Experience building data pipelines that handle large volumes of data (either leveraging well-known tools or custom-made ones).
- Advanced English is mandatory.

Proficiency in the following topics is highly appreciated:

- Building Data Lakes with Lambda/Kappa/Delta architecture.
- DataOps, particularly creating and managing processes for batch and real-time data ingestion and processing.
- Hands-on experience managing data loads and data quality.
- Modernizing enterprise data warehouses and business intelligence environments with open-source tools.
- Deploying Big Data solutions to the cloud (Cloudera, AWS, GCP, or Azure).
- Performing real-time data visualization and time-series analysis using both open-source and commercial solutions.

We offer:

- 100% remote / work-from-home flexible schedules.
- Excellent compensation.
- Multiple opportunities to learn and grow in a people-first environment.
- Warm company culture.
- Clients interested in what you have to say, eager to hear your opinions, and keen on working together towards building something great.

Apply now and become part of this fantastic startup. At BairesDev, remote work is at our core. Enjoy the opportunity to have a dynamic lifestyle, better health, and wellness. Find renewed passion in your job, improve your productivity, and benefit from attractive growth opportunities for your career.
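The DataOps topics above center on moving batches of data through a pipeline while enforcing data quality. As a rough illustration of that idea (not BairesDev's actual tooling; all names here, such as `validate_record` and `ingest_batch`, are hypothetical), a minimal batch-ingestion step might gate records through a quality check and quarantine the failures for review:

```python
def validate_record(record):
    """Reject records missing required fields or with negative amounts."""
    return (
        isinstance(record.get("id"), int)
        and record.get("amount") is not None
        and record["amount"] >= 0
    )

def ingest_batch(records):
    """Split a batch into accepted rows and a quarantine list for review."""
    accepted, quarantined = [], []
    for record in records:
        (accepted if validate_record(record) else quarantined).append(record)
    return accepted, quarantined

if __name__ == "__main__":
    batch = [
        {"id": 1, "amount": 9.99},
        {"id": 2, "amount": -5.00},   # fails the quality gate
        {"amount": 3.50},             # missing required id
    ]
    ok, bad = ingest_batch(batch)
    print(len(ok), len(bad))  # 1 2
```

Real pipelines would layer the same accept/quarantine pattern over tools like Spark or cloud-native ingestion services rather than plain Python lists.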


See more jobs at BairesDev

# How do you apply?

This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.

VividCortex Database Performance Monitoring


closed

Senior Big Data Scalability Engineer


VividCortex Database Performance Monitoring


big data

senior

engineer
This job post is closed and the position is probably filled. Please do not apply.
*Only candidates residing inside of the United States will be considered for this role.*

About VividCortex

Are you excited by designing and developing a high-volume, highly available, and highly scalable SaaS product in AWS that supports Fortune 500 companies? Do you love the energy and excitement of being part of a growing and successful startup company? Are you passionate about diving deep into technologies such as microservices/APIs, database design, and data architecture?

VividCortex, founded in 2012, is building a world-class company with a mixed-discipline team to provide incredible value for our customers. Hundreds of industry leaders like GitHub, SendGrid, Etsy, Yelp, Shopify, and DraftKings rely on VividCortex. Our company's growth continues to accelerate (#673 Inc. 5000) for yet another year, so we need your help.

We are extremely customer-focused, engaged in building an authentic, low-drama team that is open, candid, sincerely practicing "disagree and commit," constantly learning and improving, and with a focused, get-it-done attitude about our commitments.

A successful candidate thrives in a highly collaborative and fast-paced environment. We expect and encourage innovation, responsibility, and accountability from our team members, and expect you to make substantial contributions to the architectural and technical direction of both the product and the company.

About the Role

VividCortex needs an experienced and senior hands-on data and software engineer who has "been there and done that" to help take our company to the next level. We are designing and building our next-generation system for continuous high-volume data storage, analysis, and presentation. You are hands-on and working at the intersection of data, engineering, and product. You are key in defining the strategy and tactics of how we store and process massive amounts of performance metrics and other data we capture from our customers' database servers.

Our platform is written in Go and hosted entirely on the AWS cloud. It currently uses Kafka, Redis, and MySQL technologies, among others. We are a DevOps organization building a 12-factor microservices application; we practice small, fast cycles of rapid improvement and full exposure to the entire infrastructure, but we don't take anything to extremes.

The position offers excellent benefits, a competitive base salary, and the opportunity for equity. Diversity is important to us, and we welcome and encourage applicants from all walks of life and all backgrounds.

What You Will Be Doing

* Discover, define, design, document, and assist in developing scalable backend storage and robust data pipelines for different types of data streams of both structured and unstructured data, in an AWS environment based on Linux and Golang.
* Work with others to define, and propose for approval, a modern data platform design strategy and matching architecture and technology choices to support it, with the goal of providing a highly scalable, economical, observable, and operable data platform for storing and processing very large amounts of data within tight performance tolerances.
* Perform high-level strategy and hands-on infrastructure development for the VividCortex data platform, developing and deploying new data management services in AWS.
* Collaborate with engineering management to drive data systems design, deployment strategies, scalability, infrastructure efficiency, monitoring, and security.
* Write code, tests, and deployment manifests and artifacts.
* Work with CircleCI and GitHub in a Linux environment.
* Issue pull requests, create issues, and participate in code reviews and approvals.
* Continually seek to understand, measure, and improve performance, reliability, resilience, scalability, and automation of the system. Our goal is that systems should scale linearly with customer growth, while the effort of maintaining them scales sub-linearly.
* Support product management in prioritizing and coordinating work on changes, and serve as a lead in creating user-focused technical requirements and analysis.
* Assist with customer support, sales, and other activities as needed.
* Understand and enact our security posture and practices.
* Rotate through on-call duty.
* Contribute to a culture of continuous learning and clear responsibility and accountability.
* Manage your workload, collaborating and working independently as needed, keeping management appropriately informed of progress and issues.

Basic Qualifications:

* Experience developing and extending a SaaS multi-tenant application.
* Domain expertise in scalable, highly available data storage: scaling, organization, formats, security, reliability, etc.
* Capable of deep technical understanding and discussion of databases, software and service design, systems, and storage.
* 10+ years of experience in distributed software systems design and development.
* 7+ years of experience programming in Golang, Java, C#, or C.
* 7+ years of experience designing, implementing, and maintaining data pipelines at big-data scale, employing a wide variety of big data technologies, as well as cleaning and organizing data to be reliable and usable.
* Experience designing and maintaining highly complex data infrastructures.
* Mastery of relational database concepts, including a strong knowledge of SQL and of technologies such as MySQL and Postgres.
* Experience with CI/CD, Git, and development in a Unix/Linux environment using the command line.
* Excellent written and verbal communication skills.
* Ability to understand and translate customer needs into leading-edge technology.
* Collaborative, with a passion for highly effective teams and development processes.

Preferred Qualifications:

* Master's degree in Computer Science or equivalent work experience.
* Experience designing and deploying solutions with NoSQL technologies such as Mongo and DynamoDB.
* 3+ years of experience with AWS infrastructure development, including experience with a variety of ingestion technologies, processing frameworks, and storage engines, and an understanding of the tradeoffs between them.
* Experience with Linux systems administration and enterprise security.
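The role above centers on storing and processing high-volume performance metrics within tight tolerances. A common technique for that kind of workload is downsampling raw samples into fixed time buckets before storage. The sketch below illustrates the bucketing idea only; it is not VividCortex's implementation (their platform is written in Go), and the `downsample` name is hypothetical:

```python
from collections import defaultdict

def downsample(samples, bucket_seconds=60):
    """Aggregate (unix_ts, value) samples into per-bucket count/sum/max."""
    buckets = defaultdict(lambda: {"count": 0, "sum": 0.0, "max": float("-inf")})
    for ts, value in samples:
        # Truncate the timestamp to the start of its bucket.
        b = buckets[ts - ts % bucket_seconds]
        b["count"] += 1
        b["sum"] += value
        b["max"] = max(b["max"], value)
    return dict(buckets)

if __name__ == "__main__":
    samples = [(0, 1.0), (30, 3.0), (60, 2.0)]
    out = downsample(samples)
    print(out[0]["count"], out[0]["sum"], out[60]["max"])  # 2 4.0 2.0
```

Storing count/sum/max per bucket keeps writes and storage roughly proportional to the number of buckets rather than the number of raw samples, which is one way systems like this aim to scale sub-linearly with incoming data volume.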


See more jobs at VividCortex Database Performance Monitoring

# How do you apply?

This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.