What We're Building And Why We're Building It
There's a reason Fetch is ranked top 10 in Shopping in the App Store. Every day, millions of people earn Fetch Points buying brands they love. From the grocery aisle to the drive-through, Fetch makes saving money fun. We're more than just a build-first tech unicorn. We're a revolutionary shopping platform where brands and consumers come together for a loyalty-driving, points-exploding, money-saving party.
Join a fast-growing, founder-led technology company that's still only in its early innings. Ranked one of America's Best Startup Employers by Forbes two years in a row, Fetch is building a people-first culture rooted in trust and accountability. How do we do it? By empowering employees to think big, challenge ideas, and find new ways to bring the fun to Fetch. So what are you waiting for? Apply to join our rocketship today!
Fetch is an equal employment opportunity employer.
The ML Engineering team embodies these values and works with a laser-focused objective: enabling intelligent systems for end users, internal stakeholders, and external partners. We are looking for a Machine Learning Engineer Apprentice to contribute to this vision and reap the rewards of joining an exciting company in its high-growth phase. Among other things, Fetch uses multiple ML models to power every scan in the app (millions a day and growing), fight fraudulent behavior, and drive recommendations for users. Machine learning is core to our product, and we're working to make it an even bigger part of the company.
Your focus will be on the intersection of training ML models and deploying them to production. MLEs at Fetch are responsible for the full machine learning cycle on a team: managing, cleaning, and piping data; training models for iterative improvements; and deploying those models to production. This work is done in collaboration with backend engineers and data scientists on a product team. You'll be expected to create value in a fast-moving environment, which might mean deep-diving into any one of these stages of the pipeline at a given moment.
Are you capable of training and deploying a Transformer model but know when a simpler solution will do? Do you like knowing how model architectures translate to FLOPs and milliseconds shaved off a server? Have you lost entire days debugging inscrutable CUDA errors? If you answered yes to these questions, we'd love to hear from you.
Technical Skills:
- Excellent programming skills (we use a lot of Python in this problem space, but proficiency in other languages is equally welcome)
- Experience training ML models using a Python framework like PyTorch or TensorFlow
- Experience deploying a model in a production environment with significant traffic; we process hundreds of events per second in our production pipelines
- Experience working inside a codebase of meaningful size (at least a thousand lines) over an extended period of time
Bonus Points For:
- Excellent written and verbal communication skills
- Ability to problem solve independently and demonstrate initiative
- Deep PyTorch/TensorFlow expertise
- Experience with model-serving platforms like TensorFlow Serving, TorchServe, or Triton
- Comfort with streaming data and Kafka
- Experience deploying applications in public cloud environments like AWS