- Provide expertise on the product deployments and be responsible for supporting and maintaining full stack applications.
- Design and author production quality technical documentation and reference materials to facilitate training.
- Plan, coordinate and execute product upgrades and patch set updates.
- Understand system designs and analyze and implement products in Big Data environment.
- Work on the Hadoop ecosystem and on cloud IaaS environments including AWS EC2, Microsoft Azure, and Google Cloud.
- Use automation tools such as Ansible, Chef or Puppet.
- Perform Spark configuration, performance tuning, and system integrations.
- Troubleshoot Linux, Docker container, and Hadoop cluster system performance.
- Analyze workload on the system, recognize problem areas, and proactively intervene where necessary.
- Understand management priorities and deliver every assigned task with a sense of responsibility and dedication.
- Research and document product system features and usage for standards recording.
- Perform product-related troubleshooting and analysis in collaboration with client teams.
- Create dashboards and reports with an emphasis on visualization, design, and usability.
- Work independently as well as in a team environment on moderate to highly complex issues.
- Bachelor’s degree in Computer Science, Engineering, or a related field;
- 5 years of experience as a Software Engineer or related occupation;
- 5 years of experience with Linux and networking;
- 5 years of hands-on experience with at least one programming language and Hadoop; and
- 5 years of experience with SQL or NoSQL databases.
Experience may be gained concurrently.