This job post is closed and the position is probably filled. Please do not apply.
If you're inspired by innovation, hard work, and a passion for data, this may be the ideal opportunity to leverage your background in Big Data and Software Engineering, Data Engineering, or Data Analytics to design, develop, and innovate big data solutions for a diverse set of global and enterprise clients.

At phData, our proven success has skyrocketed the demand for our services, resulting in quality growth at our company headquarters, conveniently located in Downtown Minneapolis, and expanding throughout the US. Notably, we've also been voted Best Company to Work For in Minneapolis for the last 2 years.

As the world's largest pure-play Big Data services firm, our team includes Apache committers, Spark experts, and the most knowledgeable Scala development team in the industry. phData has earned the trust of customers by demonstrating our mastery of Hadoop services and our commitment to excellence.

In addition to a phenomenal growth and learning opportunity, we offer competitive compensation and excellent perks, including base salary, annual bonus, extensive training, and paid Cloudera certifications - in addition to generous PTO and employee equity.

As a Solution Architect on our Big Data Consulting Team, your responsibilities will include:

* Design, develop, and innovate Hadoop solutions; partner with our internal Infrastructure Architects and Data Engineers to build creative solutions to tough big data problems.
* Determine the technical project road map, select the best tools, assign tasks and priorities, and assume general project management oversight for performance, data integration, ecosystem integration, and security of big data solutions.
* Mentor and coach Developers and Data Engineers; provide guidance with project creation, application structure, automation, code style, testing, and code reviews.
* Work across a broad range of technologies – from infrastructure to applications – to ensure the ideal Hadoop solution is implemented and optimized.
* Integrate data from a variety of data sources (data warehouse, data marts) utilizing on-prem or cloud-based data structures (AWS); determine new and existing data sources.
* Design and implement streaming, data lake, and analytics big data solutions.
* Create and direct testing strategies, including unit, integration, and full end-to-end tests of data pipelines.
* Select the right storage solution for a project, comparing Kudu, HBase, HDFS, and relational databases based on their strengths.
* Utilize ETL processes to build data repositories; integrate data into the Hadoop data lake using Sqoop (batch ingest), Kafka (streaming), and Spark, Hive, or Impala (transformation).
* Partner with our Managed Services team to design and install on-prem or cloud-based infrastructure, including networking, virtual machines, containers, and software.
* Determine and select the best tools to ensure optimized data performance; perform data analysis utilizing Spark, Hive, and Impala.
* Local candidates work between the client site and our office (Minneapolis). Remote US candidates must be willing to travel 20% for training and project kick-offs.

Technical Leadership Qualifications

* 5+ years of previous experience as a Software Engineer, Data Engineer, or Data Analyst.
* Expertise in core Hadoop technologies, including HDFS, Hive, and YARN.
* Deep experience in one or more ecosystem products/languages such as HBase, Spark, Impala, Solr, Kudu, etc.
* Expert programming experience in Java, Scala, or another statically typed programming language.
* Ability to learn new technologies in a quickly changing field.
* Strong working knowledge of SQL and the ability to write, debug, and optimize distributed SQL queries.
* Excellent communication skills, including proven experience working with key stakeholders and customers.

Leadership

* Ability to translate "big picture" business requirements and use cases into a Hadoop solution, including ingestion of many data sources, ETL processing, data access and consumption, and custom analytics.
* Experience scoping activities on large-scale, complex technology infrastructure projects.
* Customer relationship management, including project escalations and participation in executive steering meetings.
* Coaching and mentoring data or software engineers.

#Salary and compensation
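To give a flavor of the batch-transformation work described above, here is a minimal sketch of a Spark job in Scala. All table names, paths, and columns are hypothetical placeholders, and the job assumes a cluster where Sqoop has already batch-ingested a `staging.orders` table into Hive; it is an illustration, not phData's actual code.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object OrdersDailyRollup {
  def main(args: Array[String]): Unit = {
    // Hive support lets Spark read tables previously batch-ingested with Sqoop.
    val spark = SparkSession.builder()
      .appName("orders-daily-rollup")
      .enableHiveSupport()
      .getOrCreate()

    // Hypothetical source table and schema; substitute your own.
    val orders = spark.table("staging.orders")

    // Transform: roll completed orders up to one row per day.
    val daily = orders
      .filter(col("order_status") === "COMPLETE")
      .groupBy(to_date(col("order_ts")).as("order_date"))
      .agg(
        sum("order_total").as("revenue"),
        count(lit(1)).as("order_count"))

    // Write partitioned Parquet into the data lake for Hive/Impala consumption.
    daily.write
      .mode("overwrite")
      .partitionBy("order_date")
      .parquet("hdfs:///data/lake/orders_daily")

    spark.stop()
  }
}
```

The same rollup could be expressed in Hive or Impala SQL; Spark is shown here because the role emphasizes Scala and statically typed pipelines.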
No salary data was published by the company, so we estimated the salary based on similar Architecture, Cloud, Scala, Travel, Engineer, and Apache jobs:

$80,000 — $120,000/year

#Benefits
* 401(k)
* Distributed team
* Async
* Vision insurance
* Dental insurance
* Medical insurance
* Unlimited vacation
* Paid time off
* 4 day workweek
* 401k matching
* Company retreats
* Coworking budget
* Learning budget
* Free gym membership
* Mental wellness budget
* Home office budget
* Pay in crypto
* Pseudonymous
* Profit sharing
* Equity compensation
* No whiteboard interview
* No monitoring system
* No politics at work
* We hire old (and young)
# How do you apply?

This job post has been closed by the poster, which means they probably have enough applicants now. Please do not apply.