Short Update
Maintainer's Comment | 12.02.2025
Over the weekend I implemented the first PipelineExec to show how we want to use cubecl-reduce for the first iteration of Use-AI.rs. In the future we want to build our own reduce strategies and configurations, but this will be good enough for now. With PipelinePush we will see the first kernels that fill tensors depending on the type of PipelinePush, which are needed for later execution through PipelineExec.
With these reusable pipeline operations we can easily compose different AI models with GPU optimisation. A small sketch of what such a composition could look like follows below.
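To make the idea more concrete, here is a minimal, CPU-only sketch of how push and exec operations could compose into a pipeline. The concrete variants, the Tensor stand-in, and the run_pipeline helper are assumptions for illustration; they are not the actual Use-AI.rs types or the cubecl-reduce API, where the exec step would dispatch a real reduce kernel on the device.

```rust
// Hypothetical sketch: the real types live in Use-AI.rs and build on cubecl-reduce.
// `Tensor` is a CPU stand-in here; on the GPU it would be a device buffer.
#[derive(Debug, Clone)]
struct Tensor {
    data: Vec<f32>,
}

/// Fills a tensor depending on its variant (assumed variants for illustration).
enum PipelinePush {
    Zeros { len: usize },
    Constant { len: usize, value: f32 },
}

/// Reduces the most recently pushed tensor (assumed variants; in Use-AI.rs
/// this is where a cubecl-reduce kernel would run).
enum PipelineExec {
    Sum,
    Max,
}

enum PipelineOp {
    Push(PipelinePush),
    Exec(PipelineExec),
}

/// Runs the ops in order: pushes fill tensors, execs reduce the latest tensor.
fn run_pipeline(ops: &[PipelineOp]) -> Option<f32> {
    let mut current: Option<Tensor> = None;
    let mut result = None;
    for op in ops {
        match op {
            PipelineOp::Push(PipelinePush::Zeros { len }) => {
                current = Some(Tensor { data: vec![0.0; *len] });
            }
            PipelineOp::Push(PipelinePush::Constant { len, value }) => {
                current = Some(Tensor { data: vec![*value; *len] });
            }
            PipelineOp::Exec(exec) => {
                let t = current.as_ref()?;
                result = Some(match exec {
                    PipelineExec::Sum => t.data.iter().sum(),
                    PipelineExec::Max => t.data.iter().copied().fold(f32::MIN, f32::max),
                });
            }
        }
    }
    result
}

fn main() {
    // Compose a tiny "model": fill a tensor with 1.5, then reduce it with a sum.
    let ops = vec![
        PipelineOp::Push(PipelinePush::Constant { len: 4, value: 1.5 }),
        PipelineOp::Exec(PipelineExec::Sum),
    ];
    println!("{:?}", run_pipeline(&ops)); // Some(6.0)
}
```

The point of the design is that a model becomes just a list of pipeline operations, so swapping a reduce strategy or adding a new push kind does not change how pipelines are composed or executed.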
This most likely means that we no longer aim to lower all of lib-calculator onto the GPU. More likely, lib-calculator::operation will be the only module that runs completely in GPU kernels.
Apart from this, I decided this week to use "XGBoost: A Scalable Tree Boosting System" (Tianqi Chen and Carlos Guestrin, 10 June 2016) as the paper that will guide our implementation of XGBoost.
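For reference, the regularized learning objective from that paper, which our implementation will ultimately have to minimise, is

$$\mathcal{L}(\phi) = \sum_i l(\hat{y}_i, y_i) + \sum_k \Omega(f_k), \qquad \Omega(f) = \gamma T + \tfrac{1}{2}\lambda \lVert w \rVert^2,$$

where $l$ is a differentiable convex loss, the $f_k$ are the regression trees, $T$ is the number of leaves, and $w$ are the leaf weights.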
Next week we will see more code again!