Remote Principal Data Engineer at USA TODAY NETWORK
USA TODAY NETWORK


Principal Data Engineer

Posted 1 year ago
The API Services team is responsible for engineering and delivering cutting-edge services to aid in content delivery to end customers. These services support 110 news brands and more than 110 million unique monthly visitors.

The Principal Data Engineer will play a key role in architecting, developing, and maintaining the data architecture for Gannett's new Content Platform, which supports the content production and delivery systems consumed by both our network of 3,000 journalists and our customer-facing products. You will be expected to design and consume large-scale, fault-tolerant, and highly available architectures. A large part of your role will be forward-looking, with an emphasis on optimizing content structures and relationships. If you have a passion for rapid development, automation, learning, and challenging and bettering your peers, along with a strong desire to operate in a full-stack environment, you'd probably fit in well here.

Responsibilities:

* Collaborate with stakeholders and developers to identify data needs and the ideal implementation.
* Contribute to the architecture and vision of Gannett's content data pipeline.
* Bring a track record of evolving complex data environments.
* Continuously evaluate data usage patterns and identify areas for improvement.
* Interface closely with data scientists and engineers to ensure the reliability and scalability of the data environment.
* Drive future-state technologies, designs, and ideas across the organization.
* Provide planning for two-week sprints.
* Provide day-to-day operational support for our applications.
* Improve and establish best practices around our application and infrastructure monitoring.

Automate everything:

* Containerizing applications with Docker
* Scripting new solutions/APIs/services to reduce toil
* Researching new tools to optimize cost, deployment speed, and resource usage
* Assisting in improving our onboarding structure and documentation

Responsibility Breakdown:

* 30% - Data architecture design / review
* 20% - Mentoring
* 15% - Application support
* 15% - Planning / documentation
* 10% - Designing applications / recommendations / proofs of concept
* 10% - New technology evaluation

Technologies:

Systems:

* Linux
* Couchbase
* Elasticsearch
* Solr
* Neo4j
* Other NoSQL databases

Exciting things you get to do:

* Engineer high-performance applications with an emphasis on concurrency
* Agile development
* Amazon Web Services, Google Compute Engine
* Google Datastore, Spanner, DynamoDB
* Docker, Kubernetes
* Database testing
* GraphQL
* Fastly
* Terraform
* Monitoring with New Relic

Minimum Qualifications:

* Deep experience in ETL design, schema design, and dimensional data modeling.
* Ability to match business requirements to technical ETL design and data infrastructure needs.
* Experience using search technologies like Elasticsearch and Solr, and designing the integration of search with a persistent data store.
* Deep understanding of data normalization methodologies.
* Deep understanding of both relational and NoSQL databases.
* Experience with data solutions like Hadoop, Teradata, and Oracle.
* Proven expertise with query languages such as SQL, T-SQL, NRQL, and Solr queries.
* Self-starter who can operate in a remote-friendly environment.
* Experience with Agile (Scrum), test-driven development, continuous integration, and version control (Git).
* Experience deploying to cloud compute or container hosting.
* Experience working with data modeling tools.
* Basic understanding of REST APIs, SDKs, and CLI toolsets.
* Understanding of web technologies.
* Experience with data in the media industry is a plus.


# How do you apply?

This job post is older than 30 days and the position is probably filled. Try applying to jobs posted recently instead.