Solutions Architect

Job Number: 113577918

Santa Clara Valley, California, United States

Posted: 12-Mar-2018

Weekly Hours: 40.00

Job Summary

At Apple, great ideas have a way of becoming great products, services, and customer experiences very quickly. Bring passion and dedication to your job and there's no telling what you could accomplish. Would you like to work in a fast-paced environment where your technical abilities will be challenged on a day-to-day basis? If you are passionate about building end-to-end, large-scale data solutions, the Apple Global Business Intelligence team is looking for a seasoned Data Warehouse Solutions Architect with a deep understanding of ETL and data modeling concepts.

Apple's Enterprise Data Warehouse landscape caters to a wide variety of real-time, near-real-time, and batch analytical solutions. These solutions are an integral part of business functions like Sales, Operations, Finance, AppleCare, Marketing, and Internet Services, enabling business drivers to make critical decisions.

In this role, you will be part of a large development team designing and building systems across a diverse technology stack that includes Teradata, HANA, Vertica, Hadoop, Kafka, Spark, Cassandra, and beyond. You will define standards and best practices and help drive adoption of our latest frameworks. You will be directly responsible and accountable for critical data solutions across various business functions.

Key Qualifications

  • In-depth understanding of data structures, algorithms, and end-to-end solution design

  • Experience managing and processing large data sets on multi-server, distributed systems, from inception to execution; experience with databases such as Oracle, Teradata, Vertica, and Hadoop

  • Experience in designing and building dimensional data models to improve accessibility, efficiency, and quality of data

  • Programming experience building high-quality software; skills in Java, Python, or Scala preferred

  • Experience designing and developing ETL data pipelines; proficiency in writing advanced SQL and expertise in SQL performance tuning

  • Expert knowledge of distributed computing, parallel programming, concurrency control, and transaction processing

  • Demonstrated strong understanding of development processes and agile methodologies

  • Strong analytical and communication skills

  • Self-driven, highly motivated, and able to learn quickly

  • Big Data/Hadoop ecosystem programming experience highly desirable, especially with Java, Spark, Hive, Oozie, Kafka, and MapReduce

  • Experience with, or advanced coursework in, data science and machine learning is a plus

  • Work/project experience with big data and advanced programming languages is a plus

Description

  • Drive, design, and develop data processing pipelines, applications, and tools that promote product stability, reliability, and maintainability

  • Design and build data structures on MPP platforms like Teradata or Hadoop to provide efficient reporting and analytics capabilities

  • Design and build highly scalable data pipelines using new-generation tools and technologies like Spark and Kafka

  • Lead innovative efforts in processing data accurately and at scale

  • Build, deploy and support production services in distributed environments

  • Mentor other developers; define standards and best practices and help drive their adoption

  • Benchmark application performance and continue to tune and scale to accommodate growth


Education & Experience

Bachelor’s Degree

Additional Requirements