kobolds-io/stdx
Extensions to the zig standard library
CAUTION this project is currently in development and should be used at your own risk. Until there is a stable tagged release, be careful.
This is a library adding several generally useful tools that are either not included in the standard library or have slightly different behavior. As the zig programming language matures, we should get more and more awesome std library features, but until then...
All data structures, algorithms, and utilities included in this library are written from scratch. This minimizes the threat of malicious or unintentional supply chain attacks. It also ensures that all code is controlled in a single place and HOPEFULLY minimizes the chance that zig turns into the hellish monstrosity that is npm and the nodejs ecosystem.
Using stdx is just as simple as using any other zig dependency.
// import the library into your file
const std = @import("std");
const stdx = @import("stdx");

pub fn main() !void {
    // set up an allocator for the pool to use
    const allocator = std.heap.page_allocator;

    // your code
    // ...
    const memory_pool = try stdx.MemoryPool(i32).init(allocator, 200);
    defer memory_pool.deinit();

    // your code
    // ...
}
Install using zig fetch
zig fetch --save https://github.com/kobolds-io/stdx/archive/refs/tags/v0.1.0.tar.gz
Alternatively, you can install stdx just like any other zig dependency by editing your build.zig.zon file.
.dependencies = .{
    .stdx = .{
        .url = "https://github.com/kobolds-io/stdx/archive/refs/tags/v0.1.0.tar.gz",
        .hash = "",
    },
},
Run zig build --fetch to fetch the dependencies. This will return an error because the hash will not match; copy the new hash into your build.zig.zon and try again. Sometimes zig is helpful and caches things for you in the zig-cache dir. Try deleting that directory if you see some issues.
In the build.zig file, add the library as a dependency.
// ...boilerplate
const stdx_dep = b.dependency("stdx", .{
    .target = target,
    .optimize = optimize,
});

const stdx_mod = stdx_dep.module("stdx");
exe.root_module.addImport("stdx", stdx_mod);
This library follows the organization of the zig std library. You will see familiar hierarchies like stdx.mem for memory utilities and stdx.<DATA_STRUCTURE> for other data structures. As I build this library out, I'll add more notes and documentation.
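As a rough illustration of that layout (stdx.mem and stdx.RingBuffer are mentioned in this README; anything beyond that is an assumption):

const stdx = @import("stdx");

// Submodules hang off the top level module, mirroring the std layout.
const mem = stdx.mem;

// Data structures live directly on the top level module.
const RingBuffer = stdx.RingBuffer;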
There are examples included in this library that give a brief overview of how each feature can be used. Examples live in the examples directory, and more examples are always welcome. You can build and run them by performing the following steps.
zig build examples
./zig-out/bin/<example_name>
Examples are best used if you modify the code and add print statements to figure out what is going on. For additional tips on how features work, take a look at the tests included in the source code.
There are benchmarks included in this library that you can run on your local hardware or target hardware. Benchmarks live in the benchmarks directory, and more benchmarks are always welcome; you can run them by performing the following steps. Benchmarks in this library are written using zbench by hendriknielander. Please check out that repo, star it, and support other zig developers.
Note: Benchmarks are always a point of contention. One of my goals is to provision some hardware in the cloud that is consistently used for all comparisons. Until then, you can run the code locally to measure performance yourself. The benchmarks below were run inside a virtual machine with a fully emulated CPU, so you will see better performance on a native machine.
# with standard optimizations (debug build)
zig build bench
# or with more optimizations
zig build bench -Doptimize=ReleaseFast
Example output
--------------------------------------------------------
Operating System: linux x86_64
CPU: 13th Gen Intel(R) Core(TM) i9-13900K
CPU Cores: 24
Total Memory: 23.299GiB
--------------------------------------------------------
|----------------------------|
| BufferedChannel Benchmarks |
|----------------------------|
benchmark runs total time time/run (avg ± σ) (min ... max) p75 p99 p995
-----------------------------------------------------------------------------------------------------------------------------
send 10000 items 65535 6.528s 99.624us ± 24.238us (88.267us ... 1.665ms) 98.479us 148.124us 203.334us
receive 10000 items 65535 5.16s 78.74us ± 18.052us (76.328us ... 1.938ms) 77.04us 108.562us 126.863us
|-------------------------|
| EventEmitter Benchmarks |
|-------------------------|
benchmark runs total time time/run (avg ± σ) (min ... max) p75 p99 p995
-----------------------------------------------------------------------------------------------------------------------------
emit 1 listeners 10000 65535 2.115s 32.282us ± 10.491us (31.224us ... 1.199ms) 31.372us 46.262us 59.113us
emit 10 listeners 1000 65535 7.025s 107.208us ± 38.115us (102.648us ... 8.099ms) 105.656us 152.058us 173.709us
emit 100 listeners 100 65535 53.051s 809.509us ± 79.015us (779.659us ... 4.203ms) 808.103us 1.077ms 1.285ms
|-----------------------|
| MemoryPool Benchmarks |
|-----------------------|
benchmark runs total time time/run (avg ± σ) (min ... max) p75 p99 p995
-----------------------------------------------------------------------------------------------------------------------------
create 10000 items 65535 11.874s 181.195us ± 36.717us (166.013us ... 2.229ms) 180.053us 301.93us 346.714us
unsafeCreate 10000 ite 65535 9.847s 150.267us ± 71.11us (140.256us ... 16.836ms) 149.657us 201.683us 242.015us
|-----------------------|
| RingBuffer Benchmarks |
|-----------------------|
benchmark runs total time time/run (avg ± σ) (min ... max) p75 p99 p995
-----------------------------------------------------------------------------------------------------------------------------
prepend 10000 items 65535 2.17s 33.123us ± 13.544us (31.778us ... 2.488ms) 32.269us 46.125us 57.12us
enqueue 10000 items 65535 2.064s 31.502us ± 26.092us (29.859us ... 6.049ms) 30.57us 43.329us 51.533us
enqueueMany 10000 item 65535 2.088s 31.865us ± 26.663us (30.021us ... 6.139ms) 30.901us 45.066us 59.795us
dequeue 10000 items 65535 2.083s 31.79us ± 10.708us (30.901us ... 1.051ms) 30.913us 43.756us 54.178us
dequeueMany 10000 item 65535 2.098s 32.017us ± 17.103us (29.992us ... 3.044ms) 30.843us 43.827us 51.629us
concatenate 10000 item 65535 2.133s 32.558us ± 15.808us (31.023us ... 1.811ms) 31.215us 51.127us 71.633us
copy 10000 items 65535 2.145s 32.743us ± 12.285us (30.494us ... 968.812us) 31.83us 57.111us 73.197us
sort 10000 items 65535 29.279s 446.772us ± 66.123us (415.977us ... 9.912ms) 448.537us 625.611us 739.233us
|-------------------|
| Signal Benchmarks |
|-------------------|
benchmark runs total time time/run (avg ± σ) (min ... max) p75 p99 p995
-----------------------------------------------------------------------------------------------------------------------------
send/receive 10000 ite 65535 10.339s 157.772us ± 35.863us (152.653us ... 6.254ms) 155.745us 212.263us 248.463us
|------------------------------|
| UnbufferedChannel Benchmarks |
|------------------------------|
benchmark runs total time time/run (avg ± σ) (min ... max) p75 p99 p995
-----------------------------------------------------------------------------------------------------------------------------
send/receive 10000 ite 65535 18.778s 286.536us ± 82.494us (274.456us ... 18.887ms) 286.244us 406.188us 482.665us
Please see Contributing for more information on how to get involved.
Please see the Code of Conduct file. Simple library, simple rules.
The stdx top level module directly contains data structures and is the parent module of modules like io and net.
added v0.0.3 as stdx.BufferedChannel
The BufferedChannel is a structure that can be used to safely transmit data across threads. It uses a backing buffer which stores the actual values transmitted. Additionally, it has a very simple send/receive API and supports concepts like cancellation and timeouts.
See example and source for more information on usage.
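As a rough sketch of what cross-thread usage could look like (the init signature, the capacity argument, and the error handling below are assumptions; only send/receive come from the description above):

const std = @import("std");
const stdx = @import("stdx");

fn producer(channel: *stdx.BufferedChannel(u32)) void {
    var i: u32 = 0;
    while (i < 10) : (i += 1) {
        // blocks only when the backing buffer is full (assumed behavior)
        channel.send(i) catch return;
    }
}

pub fn main() !void {
    const allocator = std.heap.page_allocator;

    // assumed: a buffered channel of u32 backed by a 16 element buffer
    var channel = try stdx.BufferedChannel(u32).init(allocator, 16);
    defer channel.deinit();

    const thread = try std.Thread.spawn(.{}, producer, .{&channel});

    var received: usize = 0;
    while (received < 10) : (received += 1) {
        // blocks until a value is available (assumed behavior)
        const value = try channel.receive();
        std.debug.print("got {d}\n", .{value});
    }

    thread.join();
}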
added v0.0.3 as stdx.UnbufferedChannel
The UnbufferedChannel is a structure that can be used to safely transmit data across threads. It uses a Condition to notify receivers that there is new data. Additionally, it has a very simple send/receive API and supports concepts like timeouts, but it does not currently support cancellation.
See example and source for more information on usage.
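A sketch under the same caveats as above (the no-argument init and the exact send/receive signatures are assumptions); the important difference is that there is no backing buffer, so each send rendezvouses with a receive:

const std = @import("std");
const stdx = @import("stdx");

fn consumer(channel: *stdx.UnbufferedChannel(u32)) void {
    // blocks until a sender arrives; timeouts are also supported
    const value = channel.receive();
    std.debug.print("got {d}\n", .{value});
}

pub fn main() !void {
    var channel = stdx.UnbufferedChannel(u32).init();

    const thread = try std.Thread.spawn(.{}, consumer, .{&channel});

    // blocks until the consumer is ready to receive (assumed behavior)
    channel.send(42);

    thread.join();
}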
added v0.0.8 as stdx.Signal
The Signal is a structure that can be used to safely transmit data across threads. Unlike a channel, it does not require that both threads synchronize at the same point. Think of a Signal as a way for a sender to throw a value over the fence and for a receiver to pick it up at a later time (when it is convenient for the receiver). Signals are "one shots", meaning that they should only ever be used once. These structures are ideal for request->reply kinds of problems.
See example and source for more information on usage.
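A sketch of the request->reply idea (init and the exact send/receive signatures below are assumptions based on the prose above):

const std = @import("std");
const stdx = @import("stdx");

fn worker(signal: *stdx.Signal(u32)) void {
    // the sender throws the value over the fence and moves on;
    // it does not wait for the receiver to pick it up
    signal.send(123);
}

pub fn main() !void {
    var signal = stdx.Signal(u32).init();

    const thread = try std.Thread.spawn(.{}, worker, .{&signal});
    defer thread.join();

    // ... do other work ...

    // pick the value up when it is convenient; a Signal is a one shot,
    // so this send/receive pair should only happen once
    const reply = signal.receive();
    std.debug.print("reply: {d}\n", .{reply});
}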
added v0.0.6 as stdx.EventEmitter
The EventEmitter is a tool for managing communications across callbacks. It is a very similar implementation to the nodejs event emitter class, which is one of the fundamental building blocks for asynchronous events. The EventEmitter provides a simple(ish) API to register Callbacks with appropriate Contexts to be called when a specific Event is emitted.
See example and source for more information on usage.
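A loose sketch of the register-then-emit flow (the generic parameter, the on registration method, and the callback/context signatures are all assumptions; only emit and the Callback/Context/Event vocabulary come from the description and benchmarks above):

const std = @import("std");
const stdx = @import("stdx");

const Counter = struct {
    hits: usize = 0,
};

fn onPing(context: *Counter, event: []const u8) void {
    _ = event;
    context.hits += 1;
}

pub fn main() !void {
    const allocator = std.heap.page_allocator;

    var emitter = try stdx.EventEmitter([]const u8).init(allocator);
    defer emitter.deinit();

    var counter = Counter{};

    // register a callback and its context for the "ping" event (assumed API)
    try emitter.on("ping", &counter, onPing);

    // call every callback registered for "ping"
    try emitter.emit("ping");

    std.debug.print("hits: {d}\n", .{counter.hits});
}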
added v0.0.2 as stdx.ManagedQueue
The ManagedQueue is a generic queue implementation that uses a singly linked list. It supports operations like enqueueing, dequeueing, checking if the queue is empty, and concatenating two queues, and it handles the allocation/deallocation of the memory used by the queue. The queue is managed by an allocator, which is used for creating and destroying nodes.
See example and source for more information on usage.
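A sketch of the basic operations (init/deinit and the optional return from dequeue are assumptions; enqueue, dequeue, and the empty check come from the description above):

const std = @import("std");
const stdx = @import("stdx");

pub fn main() !void {
    const allocator = std.heap.page_allocator;

    // the queue owns its nodes and uses this allocator to create/destroy them
    var queue = stdx.ManagedQueue(u32).init(allocator);
    defer queue.deinit();

    try queue.enqueue(1);
    try queue.enqueue(2);
    try queue.enqueue(3);

    while (queue.dequeue()) |value| {
        std.debug.print("{d}\n", .{value});
    }

    std.debug.print("empty: {}\n", .{queue.isEmpty()});
}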
added v0.0.2 as stdx.UnmanagedQueue
The UnmanagedQueue is a generic queue implementation that uses a singly linked list. It most closely resembles the std.SinglyLinkedList in its functionality. Unlike the ManagedQueue, the UnmanagedQueue requires memory allocations to be handled externally and provides a generic Node structure to help link everything together.
Please also see UnmanagedQueueNode, which is the Node used by the UnmanagedQueue.
See example and source for more information on usage.
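A sketch of the caller-owns-the-memory pattern (the Node field names and the exact method signatures are assumptions; the point, per the description above, is that allocation happens outside the queue):

const std = @import("std");
const stdx = @import("stdx");

pub fn main() !void {
    const allocator = std.heap.page_allocator;

    const Node = stdx.UnmanagedQueueNode(u32);
    var queue = stdx.UnmanagedQueue(u32).init();

    // the caller allocates (and later frees) every node itself
    const node = try allocator.create(Node);
    defer allocator.destroy(node);
    node.* = Node{ .data = 42 };

    queue.enqueue(node);

    if (queue.dequeue()) |n| {
        std.debug.print("{d}\n", .{n.data});
    }
}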
added v0.0.1 as stdx.RingBuffer
A RingBuffer is a data structure that is really useful for managing a fixed block of memory. This particular implementation works especially well as a fixed size queue. Kobolds uses the RingBuffer data structure for inboxes and outboxes when messages are received/sent over TCP connections.
See example and source for more information on usage.
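A sketch of using it as a bounded queue (the init signature and capacity handling are assumptions; enqueue, enqueueMany, prepend, and dequeue come from the benchmark names above):

const std = @import("std");
const stdx = @import("stdx");

pub fn main() !void {
    const allocator = std.heap.page_allocator;

    // fixed capacity buffer, e.g. an inbox/outbox for a TCP connection
    var ring = try stdx.RingBuffer(u32).init(allocator, 8);
    defer ring.deinit();

    try ring.enqueue(1);
    try ring.enqueueMany(&[_]u32{ 2, 3, 4 });
    try ring.prepend(0); // push to the front instead of the back

    while (ring.dequeue()) |value| {
        std.debug.print("{d}\n", .{value});
    }
}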
added v0.0.1 as stdx.MemoryPool
A MemoryPool is a structure that uses pre-allocated blocks of memory to quickly allocate and deallocate resources. It is very useful in situations where you have statically allocated memory but fluctuating usage of that memory. A good example would be handling messages flowing through a system.
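Building on the usage example at the top of this README, a sketch of the create/destroy cycle (destroy is an assumed name; init/deinit come from the earlier example and create/unsafeCreate from the benchmark names):

const std = @import("std");
const stdx = @import("stdx");

const Message = struct {
    id: u64,
    payload: [64]u8,
};

pub fn main() !void {
    const allocator = std.heap.page_allocator;

    // pre-allocate 200 Message blocks up front
    var pool = try stdx.MemoryPool(Message).init(allocator, 200);
    defer pool.deinit();

    // grab a block from the pool instead of hitting the allocator
    const msg = try pool.create();
    msg.* = Message{ .id = 1, .payload = undefined };

    // hand the block back so it can be reused (assumed method name)
    pool.destroy(msg);
}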