Principal Engineer I - Data Logistics
Date posted: 12/03/2018 | Requisition Number: 231368BR | Location: United States - Colorado - Greenwood Village | Zip Code: 80111 | Area of Interest: Engineering/Technical Operations, Information Technology Services, Telecommunications | Position Type: Full Time
The Principal Engineer I is responsible for creating and assisting in the design, development, and implementation of the enterprise-wide Hadoop/Kafka/Splunk architectures, including both hardware and software applications as well as their associated operating systems and databases. The individual in this role also mentors other Platform team members as a subject matter expert on multiple platform-related applications, though not in a people management capacity.
DUTIES AND RESPONSIBILITIES
- Provide end-to-end network management and support for the enterprise Data Logistics and Splunk deployments
- Develop and document standards, processes, and procedures that promote forward-thinking operational efficiencies across the Splunk network
- Provide cutting-edge solutions for internal customer applications
- Work closely and collaborate with other teams and team members
- Plan and implement future developments for all applications deployed on the OSS platform
- Provide third-level support and problem resolution for major outages or complex technical challenges that arise on any OSS platform team application
- Work closely with team management and principal engineering on all architectural decisions
- Work in a team environment, collaborating with other members of the OSS platform team as well as selected vendors to provide state-of-the-art, long-term technical solutions
- As a Principal Engineer I, not only provide the highest level of expertise for the Data Logistics and Splunk environment but also offer your skills to other team members in a mentoring, SME role
- Provide guidance on technical decisions for support
- Provide team members with training on new applications and designs for their respective responsibilities
QUALIFICATIONS
- Minimum of Ten (10) years of Systems Engineering experience
- Minimum of Five (5) years of experience working with Hadoop and large database repository environments
- Minimum of Five (5) years of hands-on experience in developing and supporting carrier-grade database architectures
- Bachelor’s degree (BA/BS) from four-year college or university; or equivalent training, education, and work experience
- Experience working with the following technologies: Splunk, Unix/Linux (RedHat/CentOS), Kafka, Python, Perl, SQL, IP networking protocols, Chef, Puppet, Ansible
- Experience with one or more of the following applications: Network routing, network switching, DNS, DHCP, RADIUS, LDAP, load balancing, disaster recovery, scaling and sizing for network bandwidth utilization
- Experience working within the telecommunications and cable industry
- Certification in Splunk applications: Splunk architect/admin/power user
- Certification in Hadoop/Cloudera
- Oracle or database admin certification
- Vendor related network certifications
- Graduate degree or high level of technical certifications preferred.
The Spectrum brands (including Spectrum Networks, Spectrum Enterprise and Spectrum Reach) are powered and innovated by Charter Communications. Charter Communications reaffirms its commitment to providing equal opportunities for employment and advancement to qualified employees and applicants. Individuals will be considered for positions for which they meet the minimum qualifications and are able to perform without regard to race, color, gender, age, religion, disability, national origin, veteran status, sexual orientation, gender identity, or any other basis protected by federal, state or local laws.