
USA TODAY NETWORK



This job post is closed and the position is probably filled. Please do not apply.
The API Services team is responsible for engineering and delivering cutting-edge services that aid content delivery to end customers. These services support 110 news brands and more than 110 million unique monthly visitors.

The Principal Data Engineer will play a key role in architecting, developing, and maintaining the data architecture for Gannett's new Content Platform, which supports the content production and delivery systems consumed by both our network of 3,000 journalists and our customer-facing products. You will be expected to design and consume large-scale, fault-tolerant, and highly available architectures. A large part of your role will be forward-looking, with an emphasis on optimizing content structures and relationships. If you have a passion for rapid development, automation, learning, and challenging and bettering your peers, along with a strong desire to operate in a full-stack environment, you'd probably fit in well here.

Responsibilities:

* Collaborate with stakeholders and developers to identify data needs and the ideal implementation.
* Contribute to the architecture and vision of Gannett's content data pipeline.
* Apply a track record of evolving complex data environments.
* Continuously evaluate data usage patterns and identify areas for improvement.
* Interface closely with data scientists and engineers to ensure the reliability and scalability of the data environment.
* Drive future-state technologies, designs, and ideas across the organization.
* Provide planning for two-week sprints.
* Provide day-to-day operational support for our applications.
* Improve and establish best practices around our application and infrastructure monitoring.

Automate everything:

* Containerizing applications with Docker
* Scripting new solutions/APIs/services to reduce toil
* Researching new tools to optimize cost, deployment speed, and resource usage
* Assisting in improving our onboarding structure and documentation

Responsibility breakdown:

* 30% - Data architecture design / review
* 20% - Mentoring
* 15% - Application support
* 15% - Planning / documentation
* 10% - Designing applications / recommendations / proofs of concept
* 10% - New technology evaluation

Technologies:

Systems:

* Linux
* Couchbase
* Elasticsearch
* Solr
* Neo4j
* Other NoSQL databases

Exciting things you get to do:

* Engineering high-performance applications with an emphasis on concurrency
* Agile
* Amazon Web Services, Google Compute Engine
* Google Datastore, Spanner, DynamoDB
* Docker, Kubernetes
* Database testing
* GraphQL
* Fastly
* Terraform
* Monitoring with New Relic

Minimum Qualifications:

* Deep experience in ETL design, schema design, and dimensional data modeling.
* Ability to match business requirements to technical ETL design and data infrastructure needs.
* Experience using search technologies such as Elasticsearch and Solr, and designing the integration of search with a persistent data store.
* Deep understanding of data normalization methodologies.
* Deep understanding of both relational and NoSQL databases.
* Experience with data solutions such as Hadoop, Teradata, and Oracle.
* Proven expertise with query languages such as SQL, T-SQL, NRQL, and Solr queries.
* Self-starter who can operate in a remote-friendly environment.
* Experience with Agile (Scrum), test-driven development, continuous integration, and version control (Git).
* Experience deploying to cloud compute or container hosting.
* Experience working with data modeling tools.
* Basic understanding of REST APIs, SDKs, and CLI toolsets.
* Understanding of web technologies.
* Experience with data in the media industry is a plus.


See more jobs at USA TODAY NETWORK

# How do you apply?

This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.