# Jafagervik/zybel

An AI/ML library in the making, written entirely in Zig.
**WARNING:** zybel is still a work in progress. Breaking changes may occur.
The goal is to create a framework for training DNNs in Zig (using hardware accelerators) and to compare the results.
To quote a wise man: "For the joy of programming!"
- Generic tensors using comptime (see the sketch after this list)
- Binops such as add, sub, mul, and div
- General tensor ops (clamp, reshape, sum, min, max, ...)
- SGD optimizer (a minimal kernel is sketched after this list)
- Common loss functions such as MSE and MAE, plus three more
- Activation functions
- Simple layers
- Autograd
- Computational graph
- Hardware acceleration support
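Zig's `comptime` is what makes the generic tensor possible: `Tensor(f32)` is a function that returns a type. The sketch below is illustrative only, not zybel's actual implementation; the field names and the flat 1-D layout are assumptions for brevity.

```zig
const std = @import("std");

/// Illustrative comptime-generic tensor, in the spirit of zybel's Tensor(T).
/// Field names and the 1-D layout are assumptions, not zybel's real code.
pub fn Tensor(comptime T: type) type {
    return struct {
        const Self = @This();

        data: []T,
        allocator: std.mem.Allocator,

        /// Allocate `n` elements, all set to one.
        pub fn ones(allocator: std.mem.Allocator, n: usize) !Self {
            const data = try allocator.alloc(T, n);
            @memset(data, 1);
            return .{ .data = data, .allocator = allocator };
        }

        pub fn deinit(self: *Self) void {
            self.allocator.free(self.data);
        }

        /// Element-wise add: one of the binops listed above.
        pub fn add(self: *Self, other: Self) void {
            for (self.data, other.data) |*a, b| a.* += b;
        }

        /// Reduce to a single sum, as in the general tensor ops above.
        pub fn sum(self: Self) T {
            var acc: T = 0;
            for (self.data) |v| acc += v;
            return acc;
        }
    };
}
```

Instantiating `Tensor(f64)` or `Tensor(i32)` then stamps out a specialized struct at compile time, with no runtime generics machinery.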
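The optimizer and loss items above reduce to small numeric kernels. Here is a hedged sketch of what an SGD step and the MSE loss compute, written over plain slices rather than zybel's API:

```zig
/// Illustrative SGD step: w <- w - lr * grad for each weight.
fn sgdStep(weights: []f32, grads: []const f32, lr: f32) void {
    for (weights, grads) |*w, g| w.* -= lr * g;
}

/// Illustrative mean squared error: mean((pred - target)^2).
fn mse(preds: []const f32, targets: []const f32) f32 {
    var acc: f32 = 0;
    for (preds, targets) |p, t| {
        const d = p - t;
        acc += d * d;
    }
    return acc / @as(f32, @floatFromInt(preds.len));
}
```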
Run this command in the root directory of your project (where `build.zig.zon` lives):

```sh
zig fetch --save git+https://github.com/Jafagervik/zybel.git
```
Then add these lines to `build.zig`, before the call to `b.installArtifact(exe)`:

```zig
const zybel = b.dependency("zybel", .{});
exe.root_module.addImport("zybel", zybel.module("zybel"));
```
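For context, here is where those two lines might sit in a complete `build.zig` (Zig 0.12+ style; the executable name and source path are placeholders for your project):

```zig
const std = @import("std");

pub fn build(b: *std.Build) void {
    const target = b.standardTargetOptions(.{});
    const optimize = b.standardOptimizeOption(.{});

    // "my_app" and the source path are placeholders for your project.
    const exe = b.addExecutable(.{
        .name = "my_app",
        .root_source_file = b.path("src/main.zig"),
        .target = target,
        .optimize = optimize,
    });

    // Wire up zybel as shown above.
    const zybel = b.dependency("zybel", .{});
    exe.root_module.addImport("zybel", zybel.module("zybel"));

    b.installArtifact(exe);
}
```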
Example usage:

```zig
const std = @import("std");
const zb = @import("zybel");

const Tensor = zb.Tensor;
const TF32 = Tensor(f32);

pub fn main() !void {
    var gpa = std.heap.GeneralPurposeAllocator(.{}){};
    const allocator = gpa.allocator();
    defer _ = gpa.deinit();

    // Create an f32 tensor of shape (1, 3, 3) filled with ones
    var t: TF32 = try TF32.ones(allocator, &[_]u32{ 1, 3, 3 });
    defer t.deinit();

    std.debug.print("First value is {d:.2}\n", .{t.getFirst()});

    // Overwrite the first element
    t.setVal(0, 2.0);
    std.debug.print("First value is now {d:.2}\n", .{t.getFirst()});

    // Print info about the tensor
    t.print();

    std.debug.print("Sum is {d:.2}\n", .{t.sum()});
}
```
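Assuming your `build.zig` still has the standard run step that `zig init` generates, `zig build run` will compile and run the example.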
Hell yeah! Feel free to send suggestions and corrections. I am a mere mortal, not a god.