The Tenstorrent team combines technologists from different disciplines who come together with a shared passion for AI and a deep desire to build great products. We value collaboration, curiosity, and a commitment to solving hard problems. Find out more about our culture.
At Tenstorrent, we are developing silicon, software, and systems to address the rapidly growing demands of AI and machine learning workloads. We have taped out three generations of chips and are continuously developing new ones.
Our chips are parallel dataflow processors with hundreds of AI-specialized engines, a high-throughput interconnect among the engines, and controllers for high-speed off-chip memory and communication peripherals. Each engine consists of a variable-precision math unit and multiple RISC-V cores.
Our SDK consists of a graph compiler, a runtime environment, runtime firmware for the engines, software that generates that firmware, and supporting tools. It provides a familiar environment to machine learning developers while hiding the complexities of scheduling and low-level execution.
Our systems range from single-chip boards for inference to multi-rack computers with hundreds of chips for training.
We are growing our team and looking for software engineers of all seniorities.
Responsibilities
In this role, you will develop the graph compiler part of the SDK for our AI/ML processor.
- Implement, in C++ and Python, neural network graph transformations and logic that schedules and executes neural network operations on our AI/ML processor and systems. Optimize for high performance, high resource utilization, low latency, and low power consumption.
- Develop tools to analyze and visualize performance, hardware utilization, placement, routing, and power consumption.
- Define and implement new APIs in our SDK, using Python and C++, to meet the latest needs of our customers: AI and machine learning application developers.
- Optimize and train modern neural networks on our chips and systems, as a proof of concept and as part of collaborating with our customers.
- Define new fundamental operations on tensors, as well as other functionality to be implemented in the silicon and firmware of our AI/ML processor, to increase the performance and reduce the power consumption of executing modern neural networks.
- Test and ensure functional correctness and high performance of the execution schedule generated by the graph compiler, from a single operation to a whole neural network running on hundreds of chips.
- Perform other duties as assigned.
Experience and qualifications
- Degree, or final year of study toward a degree, in Computer Science, Computer Engineering, Software Engineering, Electronics, Math, or a related field.
- Passion for programming and solid foundation in algorithms and data structures.
- Passion for neural networks and related deep-learning architectures.
- Experience and proficiency in one or more programming languages, including but not limited to Python and C++.
- Experience with various deep learning architectures, such as RNNs, transformers, and convolutional networks, is great to have.
- Experience with any of these is a plus: functional programming, parallel systems, dataflow architectures, scalar and vector processor architectures, GPU architectures and programming models, digital signal processing systems, real-time hardware and firmware systems.
Ready to apply? Have more questions? Send us your resume or questions via the form below.
Tenstorrent offers a highly competitive compensation package and benefits, and we are an equal opportunity employer.