Head Of Sysadmin
Scrapinghub is looking for a senior systems engineer to join the team as Head of Sysadmin. This role is responsible for the successful operation and scaling of the infrastructure and software that power crawls of over 2 billion pages a month.

Our infrastructure stack includes Ubuntu, Python, Django, MySQL, HBase, Docker, LXC, and AWS, along with our own technologies such as Scrapy, Crawlera, and Hubstorage.

Founded by the creators of Scrapy, Scrapinghub helps companies turn web content into useful data with a cloud-based web crawling platform, off-the-shelf datasets, and turnkey web scraping services.

Join us in making the world a better place for web crawler developers and data scientists, alongside top engineers working remotely from more than 30 countries.

Your key responsibilities will be to:

* Oversee the design, deployment, and management of our global infrastructure
* Help identify, debug, and fix problems arising on Scrapinghub's platform, working with both sysadmin and platform team members
* Organize the sysadmin team's work and delegate tasks according to each member's skill set
* Help onboard new members by writing guides and through direct mentoring
* Write tools and scripts that provide automation and self-service solutions for ourselves and other teams
* Design new systems to support production services
* Creatively solve the scaling challenges of a rapidly expanding cloud environment
* Help improve monitoring and identify key performance metrics
* Carry out proactive R&D: discovering and implementing new tools, emerging technology, etc.
* Design, implement, and maintain disaster recovery
* Troubleshoot and resolve server and network issues

A few examples of things you'll do:

* Migrate Cloudera Distribution for Hadoop (CDH) from version 4 to version 5, along with the 50+ TB of data stored in it, with minimal downtime
* Build and optimize an Elasticsearch + Logstash + Kibana stack for our development team to monitor and analyze production system usage
* Design and implement a continuous integration and deployment system based on Docker, Mesos, and an automatically configured HTTP load balancer that can reroute traffic if application containers die
* Automate server setup to scale to 300+ servers across cloud providers and bare metal, ready to replace hardware at any time without a service outage
* Set up and optimize a highly available multi-master MySQL and RabbitMQ cluster