The following skills are required (items marked "good to have" are preferred but not mandatory):
Experience with Java, Python, and highly scalable systems is beneficial; database knowledge spanning SQL and NoSQL, plus Spark, is a plus. We manage services and deployments, designing and building high-performance components, so a strong foundation in software engineering is recommended.
- Ability to write robust code in one or more of Python, Go, and Java
- Proficiency in core big-data technologies such as Spark, Hadoop, and Hive.
- Experience building real-time applications, preferably with Spark and streaming platforms such as Kafka and Kinesis.
- Good understanding of machine learning pipelines and frameworks such as TensorFlow and PyTorch (good to have).
- Familiarity with cloud services such as AWS and Azure, and with workflow orchestration tools (e.g., Airflow).
- Degree with a strong technical focus (e.g., Computer Science, Engineering).
Responsibilities:
- Design, develop, debug, and modify components of machine learning and deep learning systems and applications, including data/ETL and feature-engineering pipelines.
- Work collaboratively with data scientists, machine learning engineers, and program and product managers on the development of assigned components.
- Actively participate in group technology reviews, critiquing your own work and that of others.