Learn Zig Series (#10) - Project Structure, Modules, and File I/O

What will I learn

  • You will learn how to structure a multi-file Zig project with zig init;
  • how @import works for importing your own modules and the standard library;
  • Zig's visibility rules (pub vs private by default);
  • the build.zig build script and build.zig.zon project metadata;
  • reading and writing files with std.fs;
  • buffered I/O and line-by-line reading;
  • stdout vs std.debug.print and when to use which;
  • parsing command-line arguments from std.process.args();
  • how to add C library dependencies from build.zig;
  • combining modules, file I/O, and CLI args into a cohesive multi-file program.

Requirements

  • A working modern computer running macOS, Windows or Ubuntu;
  • An installed Zig 0.14+ distribution (download from ziglang.org);
  • The ambition to learn Zig programming.

Difficulty

  • Beginner

Curriculum (of the Learn Zig Series):

Learn Zig Series (#10) - Project Structure, Modules, and File I/O

Welcome back! In episode #9 we unlocked what I called Zig's single most powerful feature -- comptime. We built generic types by writing functions that return types, created compile-time validated data structures with @compileError, used @typeInfo for full compile-time reflection, wrote data-driven patterns with inline for that bake lookup tables into the binary, and processed strings at compile time without any runtime cost. One keyword -- comptime -- replacing C's preprocessor macros, C++ template metaprogramming, Rust's generics, and Python's metaclasses. All at once.

At the end of ep009 I mentioned that @import itself is a comptime operation, and that the module system is built on the same comptime machinery. Well, this is that episode ;-)

Here's the thing: we've spent nine episodes writing everything in a single .zig file and running it with zig run file.zig. That's fine for learning, for small experiments, for exploring a concept. But real programs -- programs that do useful things, that read and write files, that take command-line arguments, that are split into logical modules -- need more structure. They need a build system, a module system, and I/O.

This episode ties together all the language fundamentals we've covered (types from ep002, error handling from ep004, structs from ep006, allocators from ep007) and shows you how to organize them into a proper project. After today, you'll be able to build something real -- a multi-file program that reads data from disk, processes it, and writes results back. No more toy examples in single files.

Let's dive right in.

Solutions to Episode 9 Exercises

Before we start on new material, here are the solutions to last episode's exercises. As always, if you actually typed these out and compiled them (and I really hope you did!), compare your solutions:

Exercise 1 -- Pair generic:

const std = @import("std");

fn Pair(comptime A: type, comptime B: type) type {
    return struct {
        first: A,
        second: B,

        fn display(self: @This()) void {
            std.debug.print("({}, {})\n", .{ self.first, self.second });
        }
    };
}

pub fn main() void {
    const IntFloat = Pair(i32, f64);
    const StrBool = Pair([]const u8, bool);

    const p1 = IntFloat{ .first = 42, .second = 3.14 };
    const p2 = StrBool{ .first = "active", .second = true };

    p1.display(); // (42, 3.14e0)
    p2.display(); // (active, true)
}

The key insight: Pair(i32, f64) and Pair([]const u8, bool) are two completely different types, generated by the same function. @This() inside the returned anonymous struct refers to that specific generated type. If you tried to assign a Pair(i32, f64) to a variable typed as Pair([]const u8, bool), the compiler would reject it -- they're as different as u8 and f64.

Exercise 2 -- generateSquares:

fn generateSquares(comptime n: usize) [n]u32 {
    comptime {
        var result: [n]u32 = undefined;
        for (0..n) |i| {
            result[i] = @intCast(i * i);
        }
        return result;
    }
}
// const squares = generateSquares(10); // [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]

Everything inside the comptime { } block executes during compilation. The returned array is baked into the binary as a constant -- no runtime computation.

Exercise 3 -- @typeInfo struct inspector: use inline for (@typeInfo(T).@"struct".fields) and print each field.name, @typeName(field.type), and @offsetOf(T, field.name). Same pattern as the describeStruct function from the episode.
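If you'd like a compilable skeleton of that pattern, here's one possible version (the Point struct and the function name are just for illustration):

```zig
const std = @import("std");

fn describeStruct(comptime T: type) void {
    // fields is comptime-known, so we need inline for to iterate it
    inline for (@typeInfo(T).@"struct".fields) |field| {
        std.debug.print("{s}: {s} (offset {d})\n", .{
            field.name,
            @typeName(field.type),
            @offsetOf(T, field.name),
        });
    }
}

pub fn main() void {
    const Point = struct { x: f32, y: f32 };
    describeStruct(Point);
}
```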

Exercise 4 -- BoundedValue: if (comptime min >= max) @compileError("min must be less than max"); at the top. The set method clamps: self.value = @max(min, @min(max, new_val));. Trying BoundedValue(100, 50) gives a clear compile error before the program ever exists.
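Spelled out, one way the whole exercise fits together (a sketch; your field types may differ):

```zig
const std = @import("std");

fn BoundedValue(comptime min: i32, comptime max: i32) type {
    // Bad configuration is rejected before the program is ever built
    if (min >= max) @compileError("min must be less than max");
    return struct {
        value: i32 = min,

        pub fn set(self: *@This(), new_val: i32) void {
            // Clamp into [min, max]
            self.value = @max(min, @min(max, new_val));
        }
    };
}

pub fn main() void {
    var v = BoundedValue(0, 100){};
    v.set(250);
    std.debug.print("{d}\n", .{v.value}); // 100
}
```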

Exercise 5 -- Comptime lookup table: define const http_codes = [_]struct { code: u16, text: []const u8 }{ ... }; and use inline for to search. With a comptime argument the search resolves at compile time; with a runtime argument the unrolled comparisons run as sequential if checks.
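A minimal sketch of that table-plus-inline-for shape (the status codes here are just a sample):

```zig
const std = @import("std");

const http_codes = [_]struct { code: u16, text: []const u8 }{
    .{ .code = 200, .text = "OK" },
    .{ .code = 404, .text = "Not Found" },
    .{ .code = 500, .text = "Internal Server Error" },
};

fn statusText(code: u16) []const u8 {
    // The loop unrolls at compile time; with a runtime `code`
    // the comparisons run as sequential if checks.
    inline for (http_codes) |entry| {
        if (entry.code == code) return entry.text;
    }
    return "Unknown";
}

pub fn main() void {
    std.debug.print("{s}\n", .{statusText(404)}); // Not Found
}
```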

Exercise 6 -- Generic Stack: similar structure to the RingBuffer from ep009 but simpler. push returns false when full, pop returns ?T, peek returns ?T. The comptime validation if (max_size == 0) @compileError(...) catches bad configurations at compile time.
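For reference, here's one possible skeleton matching that description (a u8 stack is used only for the demo):

```zig
const std = @import("std");

fn Stack(comptime T: type, comptime max_size: usize) type {
    if (max_size == 0) @compileError("Stack capacity must be greater than zero");
    return struct {
        items: [max_size]T = undefined,
        len: usize = 0,

        pub fn push(self: *@This(), item: T) bool {
            if (self.len == max_size) return false; // full: reject
            self.items[self.len] = item;
            self.len += 1;
            return true;
        }

        pub fn pop(self: *@This()) ?T {
            if (self.len == 0) return null;
            self.len -= 1;
            return self.items[self.len];
        }

        pub fn peek(self: *const @This()) ?T {
            if (self.len == 0) return null;
            return self.items[self.len - 1];
        }
    };
}

pub fn main() void {
    var s = Stack(u8, 4){};
    _ = s.push(10);
    _ = s.push(20);
    std.debug.print("{?d} {?d}\n", .{ s.peek(), s.pop() }); // 20 20
}
```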

Now -- project structure!

Creating a Project with zig init

Every Zig project we've written so far has been a single file: zig run prices.zig or zig run buffer.zig. That's the Zig equivalent of running a Python script with python3 script.py. Quick, easy, great for exploration. But when your program grows beyond a few hundred lines -- when you want to split logic into separate files, when you need to read config files from disk, when you want a proper build pipeline -- you need a project.

mkdir price-tracker && cd price-tracker
zig init

This generates the following structure:

price-tracker/
  build.zig         -- the build script (this is Zig code!)
  build.zig.zon     -- project metadata (dependencies, versioning)
  src/
    main.zig         -- your program's entry point
    root.zig         -- library root (for building libraries)

Two things to notice immediately. First, there's no Makefile, no CMakeLists.txt, no Cargo.toml, no package.json. There's build.zig -- a Zig program that describes how to build your Zig program. Your build system is written in the same language as your project. Second, build.zig.zon handles package metadata (name, version, dependencies) in ZON, Zig's object notation. Together they replace every external build tool you'd otherwise need.
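We won't need to touch build.zig.zon until we add dependencies, but for orientation it looks roughly like this. This is a sketch only -- the exact fields vary between Zig versions (0.14 also adds a generated .fingerprint field, elided here), so trust what your zig init produced:

```zig
// build.zig.zon -- rough sketch; prefer the file `zig init` generates
.{
    .name = .price_tracker,     // package name (an enum literal in 0.14)
    .version = "0.1.0",         // semantic version string
    .dependencies = .{},        // external packages go here later
    .paths = .{                 // files included when the package is fetched
        "build.zig",
        "build.zig.zon",
        "src",
    },
}
```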

Let me show you the build.zig that zig init generates (simplified for clarity):

const std = @import("std");

pub fn build(b: *std.Build) void {
    const target = b.standardTargetOptions(.{});
    const optimize = b.standardOptimizeOption(.{});

    const exe = b.addExecutable(.{
        .name = "price-tracker",
        .root_source_file = b.path("src/main.zig"),
        .target = target,
        .optimize = optimize,
    });
    b.installArtifact(exe);

    const run_cmd = b.addRunArtifact(exe);
    run_cmd.step.dependOn(b.getInstallStep());
    if (b.args) |args| {
        run_cmd.addArgs(args);
    }

    const run_step = b.step("run", "Run the application");
    run_step.dependOn(&run_cmd.step);
}

Read that signature: pub fn build(b: *std.Build) void. It takes a pointer to a Build object (remember *T pointer parameters from ep008?) and uses it to declare build artifacts, steps, and dependencies. The b.standardTargetOptions(.{}) gives you cross-compilation for free -- you can target Linux, macOS, Windows, ARM, WASM, all from the same build script. The b.standardOptimizeOption(.{}) lets you choose between Debug, ReleaseSafe, ReleaseFast, and ReleaseSmall at build time.

To build and run:

zig build run

That's it. zig build invokes build.zig, which compiles your project, and the run step executes it. No configure && make && make install. No npm run build. Just zig build run.

If you want to just compile without running: zig build. The binary goes into zig-out/bin/price-tracker. If you want to pass arguments to your program: zig build run -- arg1 arg2 arg3 (the -- separates build arguments from program arguments).

Modules -- Splitting Code Across Files

Here's where things get interesting. In Zig, every .zig file is a module. Not "can be used as a module" -- it IS a module. The file is the module boundary, and @import brings it in. No special declaration needed, no module keyword, no export statement at the bottom.

Remember @import("std") that we've been writing since ep002? That imports the entire standard library as a module. The same mechanism works for your own files. Let me show you.

Create src/exchange.zig:

const std = @import("std");

pub const MAX_PAIRS = 64;

const internal_counter: u32 = 0; // NOT visible outside this file

pub const TradingPair = struct {
    base: []const u8,
    quote: []const u8,
    price: f64 = 0,
    volume_24h: f64 = 0,

    pub fn display(self: TradingPair) void {
        std.debug.print("{s}/{s}: ${d:.2} (vol: ${d:.0})\n", .{
            self.base, self.quote, self.price, self.volume_24h,
        });
    }

    pub fn spreadPct(self: TradingPair, bid: f64, ask: f64) f64 {
        _ = self;
        if (bid == 0) return 0;
        return (ask - bid) / bid * 100.0;
    }
};

pub const Exchange = struct {
    name: []const u8,
    pairs: [MAX_PAIRS]?TradingPair = [_]?TradingPair{null} ** MAX_PAIRS,
    pair_count: u32 = 0,

    pub fn addPair(self: *Exchange, base: []const u8, quote: []const u8, price: f64) !void {
        if (self.pair_count >= MAX_PAIRS) return error.TooManyPairs;
        self.pairs[self.pair_count] = TradingPair{
            .base = base,
            .quote = quote,
            .price = price,
        };
        self.pair_count += 1;
    }

    pub fn displayAll(self: Exchange) void {
        std.debug.print("=== {s} ({d} pairs) ===\n", .{ self.name, self.pair_count });
        for (self.pairs[0..self.pair_count]) |maybe_pair| {
            if (maybe_pair) |pair| pair.display();
        }
    }

    pub fn findPair(self: Exchange, base: []const u8) ?TradingPair {
        for (self.pairs[0..self.pair_count]) |maybe_pair| {
            if (maybe_pair) |pair| {
                if (std.mem.eql(u8, pair.base, base)) return pair;
            }
        }
        return null;
    }
};

Now src/main.zig:

const std = @import("std");
const exchange = @import("exchange.zig");

pub fn main() !void {
    var binance = exchange.Exchange{ .name = "Binance" };
    try binance.addPair("BTC", "USD", 68423.50);
    try binance.addPair("ETH", "USD", 3201.75);
    try binance.addPair("SOL", "USD", 142.30);

    binance.displayAll();

    std.debug.print("\n", .{});
    if (binance.findPair("ETH")) |eth| {
        std.debug.print("Found: ", .{});
        eth.display();
    }

    if (binance.findPair("DOGE")) |_| {
        std.debug.print("Found DOGE\n", .{});
    } else {
        std.debug.print("DOGE not listed\n", .{});
    }
}

Output:

=== Binance (3 pairs) ===
BTC/USD: $68423.50 (vol: $0)
ETH/USD: $3201.75 (vol: $0)
SOL/USD: $142.30 (vol: $0)

Found: ETH/USD: $3201.75 (vol: $0)
DOGE not listed

Let me unpack the key points.

@import("exchange.zig") imports the file as a struct-like namespace. You access its public declarations with dot syntax: exchange.Exchange, exchange.TradingPair, exchange.MAX_PAIRS. The import is a comptime operation -- the compiler resolves it during compilation, not at runtime. There's no dynamic module loading, no import failures at runtime, no ModuleNotFoundError. If the file doesn't exist, the compiler tells you during zig build.

pub controls visibility. Only declarations marked pub are visible outside the file. internal_counter in exchange.zig is NOT pub, so main.zig cannot access it. This is the same pub keyword we've seen on struct methods since ep006 -- applied to the file level. Everything is private by default. You explicitly opt into exposing things. Compare this with Python, where everything in a module is public by default and the _ prefix is just a convention that nothing enforces.

One important detail: in build.zig, the root source file is src/main.zig. When main.zig does @import("exchange.zig"), the compiler looks for exchange.zig in the same directory (src/). File-level imports are relative to the importing file. Standard library imports (@import("std")) are special -- the compiler knows where to find the standard library.

You can nest modules in subdirectories too. If you had src/models/exchange.zig, you'd import it as @import("models/exchange.zig"). Alternatively, for larger projects, you register modules in build.zig and give them names -- but for our purposes, relative imports work perfectly.
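For completeness, registering a named module is a couple of lines in build.zig. This fragment is hypothetical (the path and module name are invented for illustration), but addModule and addImport are the std.Build calls involved:

```zig
// In build.zig, after creating `exe`:
const models = b.addModule("models", .{
    .root_source_file = b.path("src/models/exchange.zig"),
});
exe.root_module.addImport("models", models);

// Any file in the project can then write:
// const exchange = @import("models");
```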

File I/O -- Reading and Writing

Nine episodes of learning Zig and we haven't written a single byte to disk. Time to fix that ;-)

File I/O in Zig works through std.fs -- the filesystem module. As you'd expect from Zig, everything that can fail returns an error union, so you try every operation. No silent failures. No exceptions flying across call stacks. Every error is handled at the call site, just like we learned in ep004.

Writing to a File

const std = @import("std");

pub fn main() !void {
    const file = try std.fs.cwd().createFile("portfolio.txt", .{});
    defer file.close();

    const writer = file.writer();
    try writer.print("BTC 0.5000 68000.00\n", .{});
    try writer.print("ETH 4.2000 3200.00\n", .{});
    try writer.print("SOL 25.0000 142.30\n", .{});
    try writer.print("AVAX 100.0000 34.50\n", .{});

    std.debug.print("Wrote portfolio to portfolio.txt\n", .{});
}

std.fs.cwd() returns a handle to the current working directory. .createFile("portfolio.txt", .{}) creates (or truncates) the file and returns a File handle. The second argument is an options struct -- the .{} gives you defaults, which mean "create if missing, truncate if it exists, default permissions (0o666 on POSIX, modified by your umask)". The try is required because file creation can fail (permissions, disk full, invalid path).

defer file.close() -- same defer pattern from ep004 and ep007. You declare cleanup right after acquisition, and the compiler guarantees it runs when the scope exits, regardless of whether the function returns normally or with an error. No resource leaks.

file.writer() returns a Writer for the file. Its .print() method works exactly like std.debug.print with format strings, except it writes to the file instead of stderr. One caveat: this writer is unbuffered -- each .print() call goes straight to the file descriptor. That's fine for a handful of lines; for lots of small writes, wrap it in std.io.bufferedWriter and remember to .flush() at the end.
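If you're writing many small records, you can batch the system calls by wrapping the file writer in std.io.bufferedWriter -- a sketch (the filename and line count are arbitrary):

```zig
const std = @import("std");

pub fn main() !void {
    const file = try std.fs.cwd().createFile("buffered.txt", .{});
    defer file.close();

    // The buffered writer accumulates bytes and writes them in chunks.
    var buffered = std.io.bufferedWriter(file.writer());
    const writer = buffered.writer();

    var i: u32 = 0;
    while (i < 1000) : (i += 1) {
        try writer.print("line {d}\n", .{i});
    }
    try buffered.flush(); // unflushed data never reaches disk!
}
```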

Reading an Entire File

const std = @import("std");

pub fn main() !void {
    var gpa = std.heap.GeneralPurposeAllocator(.{}){};
    defer _ = gpa.deinit();
    const allocator = gpa.allocator();

    const content = try std.fs.cwd().readFileAlloc(
        allocator,
        "portfolio.txt",
        1024 * 1024, // max 1 MB
    );
    defer allocator.free(content);

    std.debug.print("File contents ({d} bytes):\n{s}\n", .{ content.len, content });
}

Output:

File contents (80 bytes):
BTC 0.5000 68000.00
ETH 4.2000 3200.00
SOL 25.0000 142.30
AVAX 100.0000 34.50

readFileAlloc reads the entire file into a heap-allocated buffer. It needs an allocator (same std.mem.Allocator interface from ep007) and a maximum size limit. If the file exceeds the limit, it returns an error -- no surprise out-of-memory from trying to read a 4 GB file into RAM. The returned content is a []u8 slice that you own and must free. The defer allocator.free(content) handles that.

This is the simplest approach for small files. Read everything into memory, process it, done. For larger files, you want line-by-line reading.

Reading Line by Line

const std = @import("std");

pub fn main() !void {
    const file = try std.fs.cwd().openFile("portfolio.txt", .{});
    defer file.close();

    var buf_reader = std.io.bufferedReader(file.reader());
    var line_buf: [1024]u8 = undefined;

    var line_num: u32 = 0;
    while (buf_reader.reader().readUntilDelimiterOrEof(&line_buf, '\n')) |maybe_line| {
        if (maybe_line) |line| {
            line_num += 1;
            std.debug.print("[{d}] {s}\n", .{ line_num, line });
        } else break;
    } else |err| {
        std.debug.print("Read error: {}\n", .{err});
    }

    std.debug.print("\nTotal: {d} lines\n", .{line_num});
}

Output:

[1] BTC 0.5000 68000.00
[2] ETH 4.2000 3200.00
[3] SOL 25.0000 142.30
[4] AVAX 100.0000 34.50

Total: 4 lines

Let me break this down, because there are several layers here.

std.io.bufferedReader wraps the raw file reader in a buffered reader. Instead of making a system call for every byte, it reads chunks at a time into an internal buffer. Same concept as Python's io.BufferedReader or C's fread -- amortize the cost of system calls across many small reads.

readUntilDelimiterOrEof reads bytes into line_buf until it hits '\n' (our delimiter) or the end of file. It returns an error union wrapping ?[]u8 -- an optional slice. If it read a line, you get the slice. If it reached EOF, you get null. And if something goes wrong -- including a line longer than line_buf, which produces error.StreamTooLong -- the else |err| branch of the while loop catches it. The while loop with optional unwrapping (same pattern from ep004) handles all of these cases naturally.

var line_buf: [1024]u8 = undefined; -- a stack-allocated buffer. No heap allocation needed. The undefined initialization means we don't waste cycles zeroing it out, because readUntilDelimiterOrEof will fill it with actual data before we read from it. Same principle from ep007 and ep009.

Notice how every I/O operation composes with Zig's existing patterns. Error handling with try and catch. Resource cleanup with defer. Buffers on the stack or heap (your choice). Optionals for "maybe there's more data, maybe not." No new patterns to learn -- just the same ones we've been building since ep004, applied to files.

stdout vs std.debug.print

We've been using std.debug.print since ep002 for all our output. Time to learn the difference:

const std = @import("std");

pub fn main() !void {
    const stdout = std.io.getStdOut().writer();
    const stderr = std.io.getStdErr().writer();

    try stdout.print("This goes to stdout (data)\n", .{});
    try stderr.print("This goes to stderr (diagnostics)\n", .{});
    std.debug.print("This also goes to stderr\n", .{});
}

std.debug.print writes to stderr and is designed for diagnostics -- debug messages, error output, development logging. It doesn't require try because it ignores write errors (if stderr is broken, there's nowhere to report the error anyway).

std.io.getStdOut().writer() gives you a proper writer to stdout. This is where your program's actual output should go -- data that other programs might consume, results that should be redirectable. The try is required because stdout writes CAN fail (broken pipe, full disk when redirecting to a file).

Why does this matter? Piping and redirection. When you run ./program > output.txt, only stdout goes to the file. Stderr still shows in the terminal. When you run ./program | other-program, stdout goes to the pipe and stderr stays visible. Using the right stream means your programs compose correctly with other tools.

For our tutorials, std.debug.print has been perfectly fine. For real programs that produce output others might consume, use stdout. It's a small distinction, but it matters when you start building tools that work together.

Command-Line Arguments

Real programs need input, and the most basic form of input is command-line arguments:

const std = @import("std");

pub fn main() !void {
    const stdout = std.io.getStdOut().writer();
    var args = std.process.args();

    // First arg is always the program name
    const program_name = args.next().?;
    try stdout.print("Program: {s}\n", .{program_name});

    var count: u32 = 0;
    while (args.next()) |arg| {
        count += 1;
        try stdout.print("  arg[{d}]: {s}\n", .{ count, arg });
    }

    if (count == 0) {
        try stdout.print("No arguments provided.\n", .{});
        try stdout.print("Usage: price-tracker <ticker1> <ticker2> ...\n", .{});
    } else {
        try stdout.print("\nTotal: {d} arguments\n", .{count});
    }
}

Run it: zig build run -- BTC ETH SOL AVAX

Output:

Program: zig-out/bin/price-tracker
  arg[1]: BTC
  arg[2]: ETH
  arg[3]: SOL
  arg[4]: AVAX

Total: 4 arguments

std.process.args() returns an iterator over the command-line arguments. The first argument is always the program's path (just like sys.argv[0] in Python). Each call to .next() returns ?[:0]const u8 -- an optional sentinel-terminated string. When there are no more arguments, .next() returns null and the while loop exits.

The -- in zig build run -- BTC ETH SOL separates build-system arguments from program arguments. Without the --, zig build would try to interpret BTC as a build option and complain.

Adding C Library Dependencies

One of Zig's biggest practical advantages is seamless C interop. You can link against any C library directly from build.zig, and then call C functions from Zig without any FFI ceremony:

// In build.zig -- linking a system library:
exe.linkSystemLibrary("sqlite3");
exe.linkLibC();

Then in your Zig code:

const c = @cImport({
    @cInclude("sqlite3.h");
});

// Now you can call C functions directly
var db: ?*c.sqlite3 = null;
const rc = c.sqlite3_open("data.db", &db);
if (rc != c.SQLITE_OK) {
    std.debug.print("Failed to open database\n", .{});
}

@cImport reads a C header file and translates all the C type definitions, function declarations, and constants into Zig types. The entire ecosystem of C libraries -- SQLite, OpenSSL, curl, zlib, every single one -- is immediately available in your Zig programs with full type safety. No writing bindings by hand. No FFI generator. No unsafe blocks (like Rust requires). The compiler reads the C header and does the translation for you.

exe.linkSystemLibrary("sqlite3") tells the build system to link against the system's installed libsqlite3. exe.linkLibC() links the C standard library, which most C libraries depend on.

For vendored dependencies (C source code included in your project), the build system can compile them too:

// In build.zig -- compiling C source directly:
exe.addCSourceFile(.{
    .file = b.path("deps/miniz/miniz.c"),
    .flags = &.{ "-std=c99" },
});
exe.addIncludePath(b.path("deps/miniz/"));

This compiles miniz.c as part of your build, with the -std=c99 flag, and adds its include directory to the search path. Your Zig code can then @cImport({ @cInclude("miniz.h"); }) and use the compression library directly.

This is a BIG deal. C has had 50+ years to accumulate libraries for everything -- compression, cryptography, networking, database access, image processing, audio, graphics. Zig gives you access to all of that with zero friction, and adds memory safety, error handling, and a modern type system on top. You're not starting from scratch -- you're standing on the shoulders of the entire C ecosystem.

Bringing It Together: A Multi-File Data Processor

Let me show you how modules, file I/O, and command-line arguments combine into something that resembles a real program. We'll build a simple portfolio data processor that reads holdings from a file, computes values, and writes a report.

src/portfolio.zig -- the data module:

const std = @import("std");

pub const Holding = struct {
    ticker: [8]u8 = [_]u8{0} ** 8,
    ticker_len: u8 = 0,
    quantity: f64 = 0,
    price: f64 = 0,

    pub fn value(self: Holding) f64 {
        return self.quantity * self.price;
    }

    pub fn tickerSlice(self: *const Holding) []const u8 {
        return self.ticker[0..self.ticker_len];
    }

    pub fn display(self: *const Holding, writer: anytype) !void {
        try writer.print("  {s}: {d:.4} @ ${d:.2} = ${d:.2}\n", .{
            self.tickerSlice(),
            self.quantity,
            self.price,
            self.value(),
        });
    }
};

pub const Portfolio = struct {
    holdings: [32]Holding = [_]Holding{.{}} ** 32,
    count: u32 = 0,

    pub fn addHolding(self: *Portfolio, ticker: []const u8, qty: f64, price: f64) !void {
        if (self.count >= 32) return error.PortfolioFull;
        var h = Holding{ .quantity = qty, .price = price };
        const len: u8 = @intCast(@min(ticker.len, 8));
        @memcpy(h.ticker[0..len], ticker[0..len]);
        h.ticker_len = len;
        self.holdings[self.count] = h;
        self.count += 1;
    }

    pub fn totalValue(self: Portfolio) f64 {
        var total: f64 = 0;
        for (self.holdings[0..self.count]) |h| {
            total += h.value();
        }
        return total;
    }

    pub fn largestHolding(self: Portfolio) ?Holding {
        if (self.count == 0) return null;
        var best = self.holdings[0];
        for (self.holdings[1..self.count]) |h| {
            if (h.value() > best.value()) best = h;
        }
        return best;
    }
};

src/storage.zig -- file I/O module:

const std = @import("std");
const portfolio_mod = @import("portfolio.zig");

pub fn loadPortfolio(
    allocator: std.mem.Allocator,
    path: []const u8,
    p: *portfolio_mod.Portfolio,
) !void {
    const content = std.fs.cwd().readFileAlloc(allocator, path, 1024 * 1024) catch |err| {
        std.debug.print("Could not read '{s}': {}\n", .{ path, err });
        return err;
    };
    defer allocator.free(content);

    var line_iter = std.mem.splitScalar(u8, content, '\n');
    while (line_iter.next()) |line| {
        if (line.len == 0) continue;

        var parts = std.mem.splitScalar(u8, line, ' ');
        const ticker = parts.next() orelse continue;
        const qty_str = parts.next() orelse continue;
        const price_str = parts.next() orelse continue;

        const qty = std.fmt.parseFloat(f64, qty_str) catch continue;
        const price = std.fmt.parseFloat(f64, price_str) catch continue;

        p.addHolding(ticker, qty, price) catch |err| {
            std.debug.print("Warning: could not add {s}: {}\n", .{ ticker, err });
        };
    }
}

pub fn writeReport(p: portfolio_mod.Portfolio, path: []const u8) !void {
    const file = try std.fs.cwd().createFile(path, .{});
    defer file.close();
    const writer = file.writer();

    try writer.print("=== Portfolio Report ===\n\n", .{});
    try writer.print("Holdings ({d}):\n", .{p.count});
    for (p.holdings[0..p.count]) |h| {
        try h.display(writer);
    }

    try writer.print("\nTotal Value: ${d:.2}\n", .{p.totalValue()});
    if (p.largestHolding()) |best| {
        try writer.print("Largest Position: {s} (${d:.2})\n", .{
            best.tickerSlice(), best.value(),
        });
    }
}

src/main.zig -- the coordinator:

const std = @import("std");
const portfolio_mod = @import("portfolio.zig");
const storage = @import("storage.zig");

pub fn main() !void {
    var gpa = std.heap.GeneralPurposeAllocator(.{}){};
    defer _ = gpa.deinit();
    const allocator = gpa.allocator();

    const stdout = std.io.getStdOut().writer();

    // Parse command-line arguments
    var args = std.process.args();
    _ = args.next(); // skip program name

    const input_path = args.next() orelse {
        try stdout.print("Usage: price-tracker <input.txt> [output.txt]\n", .{});
        return;
    };

    const output_path = args.next() orelse "report.txt";

    // Load portfolio from file
    var p = portfolio_mod.Portfolio{};
    try storage.loadPortfolio(allocator, input_path, &p);

    // Display to terminal
    try stdout.print("Loaded {d} holdings from {s}\n\n", .{ p.count, input_path });
    for (p.holdings[0..p.count]) |h| {
        try h.display(stdout);
    }
    try stdout.print("\nTotal: ${d:.2}\n", .{p.totalValue()});

    // Write report to file
    try storage.writeReport(p, output_path);
    try stdout.print("\nReport written to {s}\n", .{output_path});
}

Run it:

zig build run -- portfolio.txt report.txt

Output:

Loaded 4 holdings from portfolio.txt

  BTC: 0.5000 @ $68000.00 = $34000.00
  ETH: 4.2000 @ $3200.00 = $13440.00
  SOL: 25.0000 @ $142.30 = $3557.50
  AVAX: 100.0000 @ $34.50 = $3450.00

Total: $54447.50

Report written to report.txt

Let me walk through the architecture decisions, because this is the first time we've built something with multiple cooperating modules.

Separation of concerns. portfolio.zig knows about data structures -- Holding and Portfolio. It doesn't know about files. storage.zig knows about reading and writing files. It uses portfolio.zig's types but doesn't know about the command line. main.zig coordinates everything -- it parses arguments, calls storage.loadPortfolio, displays results, and calls storage.writeReport. Each module has one job. This is not a Zig-specific pattern -- it's good software engineering in any language. But Zig's module system makes it natural.

The writer: anytype pattern in Holding.display is worth noting. The anytype parameter means this function accepts ANY type that has a print method with the right signature. We can pass it a file writer, a stdout writer, a buffered writer, anything -- it works with all of them. This is Zig's approach to polymorphism: compile-time duck typing via anytype (which the compiler resolves at comptime, generating specialized code for each call site). No virtual dispatch. No interface overhead. Same speed as calling the concrete method directly.
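To see how flexible that is, here's a tiny standalone sketch: the same anytype function writing to stdout and to an in-memory fixedBufferStream (the greet function is mine, not part of the project):

```zig
const std = @import("std");

// `writer: anytype` accepts any type with a matching `print` method --
// a file writer, stdout, or a fixed-buffer stream all work.
fn greet(writer: anytype, name: []const u8) !void {
    try writer.print("hello, {s}!\n", .{name});
}

pub fn main() !void {
    // Same function, two completely different writer types:
    try greet(std.io.getStdOut().writer(), "stdout");

    var buf: [64]u8 = undefined;
    var stream = std.io.fixedBufferStream(&buf);
    try greet(stream.writer(), "buffer");
    std.debug.print("{s}", .{stream.getWritten()});
}
```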

std.mem.splitScalar splits a byte slice on a delimiter character. It returns an iterator (no allocation!) that yields slices into the original data. We use it to split file content by newlines and then split each line by spaces. Same zero-allocation, zero-copy philosophy we've seen throughout the standard library.
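A standalone sketch of that iterator in action:

```zig
const std = @import("std");

pub fn main() void {
    const line = "BTC 0.5 68000";
    var parts = std.mem.splitScalar(u8, line, ' ');
    // Each .next() yields a slice into `line` -- no copies, no allocation.
    while (parts.next()) |part| {
        std.debug.print("[{s}]\n", .{part});
    }
}
```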

std.fmt.parseFloat converts a string to a float, returning an error union if the string isn't valid. The catch continue skips malformed lines instead of crashing -- graceful degradation. In a production program you might want to log these errors, but for a tutorial, skipping is fine.

The allocator flows from main. Notice that main.zig creates the GPA and passes its allocator to storage.loadPortfolio. The storage module doesn't create its own allocator -- it receives one. Same allocator parameter pattern from ep007. This means you can test loadPortfolio with a different allocator (say, a FixedBufferAllocator for testing), and you can verify zero leaks with the GPA. The pattern composes across module boundaries.

One more thing: Portfolio uses fixed-size arrays ([32]Holding) instead of ArrayList. This means no allocator needed for the portfolio itself. The 32-holding limit is a design choice -- for a small personal tracker, it's more than enough. If you needed hundreds of holdings, you'd switch to an ArrayList(Holding) and pass an allocator. But fixed-size keeps things simple and stack-allocated, which is perfect for learning and for many real-world scenarios too.
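For comparison, a sketch of what the dynamic version might look like (this uses a stripped-down hypothetical Holding with just two fields, not the project's type):

```zig
const std = @import("std");

const Holding = struct { quantity: f64, price: f64 };

pub fn main() !void {
    var gpa = std.heap.GeneralPurposeAllocator(.{}){};
    defer _ = gpa.deinit();
    const allocator = gpa.allocator();

    // No fixed capacity: the list grows on the heap as needed.
    var holdings = std.ArrayList(Holding).init(allocator);
    defer holdings.deinit();

    try holdings.append(.{ .quantity = 0.5, .price = 68000 });
    try holdings.append(.{ .quantity = 4.2, .price = 3200 });

    var total: f64 = 0;
    for (holdings.items) |h| total += h.quantity * h.price;
    std.debug.print("total: {d:.2}\n", .{total});
}
```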

Building and Running

The complete workflow for a multi-file project:

# Create the project
mkdir price-tracker && cd price-tracker
zig init

# Write your source files in src/
# (main.zig, portfolio.zig, storage.zig)

# Create a test data file
echo "BTC 0.5 68000" > portfolio.txt
echo "ETH 4.2 3200" >> portfolio.txt
echo "SOL 25 142.3" >> portfolio.txt
echo "AVAX 100 34.5" >> portfolio.txt

# Build and run
zig build run -- portfolio.txt report.txt

# Just build (output in zig-out/bin/)
zig build

# Build with optimizations
zig build -Doptimize=ReleaseFast

# Cross-compile for Linux ARM
zig build -Dtarget=aarch64-linux

That last one is worth emphasizing. zig build -Dtarget=aarch64-linux cross-compiles your program for ARM64 Linux -- from macOS, from Windows, from whatever you're running. No installing a cross-toolchain. No Docker. No special linker. Zig ships with the ability to target every major platform out of the box, including cross-compiling C dependencies. If you've ever tried to cross-compile a C project with gcc, you know what a nightmare that can be. Zig makes it a command-line flag.

When to Split Files

A practical question: when should you split a single-file program into multiple files?

My rule of thumb: split when a file exceeds ~300 lines, or when you can name two distinct responsibilities. If you have "data types" and "file operations" in the same file, those are two responsibilities -- split them. If you have "a price tracker" and "a report generator," split them. Each file should have one clear purpose that you can describe in a sentence.

Don't split prematurely though. A 100-line program in one file is perfectly fine. A 50-line utils.zig with two unrelated functions is worse than keeping those functions where they're used. Split for clarity, not for the sake of splitting.

As your projects grow, you'll find natural boundaries emerge. Data models in one module. I/O in another. Business logic in a third. The CLI argument parsing and orchestration in main.zig. Zig's module system -- just files and @import -- makes these splits cheap and reversible. No package managers to configure, no build system to update (as long as files are in the src/ directory).
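Why are these splits so cheap in Zig? Because every file is itself a struct, and `@import` just gives you that struct. The single-file sketch below fakes a module with a local struct (in a real project, `mathx` would be `src/mathx.zig` imported with `@import("mathx.zig")` -- the struct here is a stand-in so the example fits one file), but the visibility rules are identical: `pub` declarations are reachable from the importer, everything else is private.

```zig
const std = @import("std");

// Stand-in for a separate file src/mathx.zig; in Zig a file IS a struct,
// so this local struct behaves the same way as @import("mathx.zig") would.
const mathx = struct {
    pub fn double(x: i64) i64 {
        return x * 2;
    }

    // Not pub: invisible to importers, exactly like a private
    // file-level declaration in a real module.
    fn helper(x: i64) i64 {
        return x + 1;
    }
};

pub fn main() void {
    std.debug.print("{d}\n", .{mathx.double(21)});
    // mathx.helper(1) here would be a compile error from another file.
}
```

Moving `mathx` out to its own file later is a cut, a paste, and one `@import` line -- which is exactly why splitting (and un-splitting) stays reversible.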

Exercises

You know the drill by now. Type these out, compile them, and actually run them. File I/O exercises are especially important to do for real -- you need to see the files appear on disk, read them back, verify the contents match.

  1. Create a two-file project with zig init. In src/math_utils.zig, export three functions: add, multiply, and power (integer exponentiation). In src/main.zig, import the module and call all three functions, printing results to stdout (not debug.print). Verify that non-pub declarations in math_utils.zig are not accessible from main.zig.

  2. Write a program that creates a file with 10 lines of data (any format), then reads it back and counts: total lines, total characters, and average characters per line. Print all three stats to stdout.

  3. Accept a filename as a command-line argument. If no argument is given, print a usage message and exit. If the file exists, print its contents with line numbers. If it doesn't exist, print an error message. Handle all three cases gracefully.

  4. Create a src/tracker.zig module with a Tracker struct that has addEntry(ticker: []const u8, value: f64) and summary() methods. The summary method should print ticker, value, and percentage of total for each entry. Use this module from main.zig to track 5 items and print the summary. The display method should accept anytype as the writer parameter so it works with both stdout and file writers.

  5. Build a simple file copy tool: take two command-line arguments (source and destination), read the source file entirely into memory, write it to the destination, and print the number of bytes copied. Use the GPA and verify zero leaks. Handle missing source file, unwritable destination, and missing arguments gracefully with distinct error messages.

  6. Combine file I/O with the RingBuffer from ep009: write a program that reads numbers from a file (one per line), pushes each into an instance of RingBuffer(f64, 10), and after reading all numbers, prints the last 10 values and their average. Test with a file containing 20+ numbers. No heap allocation should be needed -- the ring buffer is on the stack.

Exercises 1-2 test the basics of modules and file I/O. Exercise 3 adds command-line argument handling. Exercise 4 tests the anytype writer pattern across modules. Exercise 5 is a complete utility program with proper error handling. Exercise 6 ties this episode back to comptime data structures from ep009.

What's Next

Ten episodes in. We've covered types, functions, control flow, error handling, arrays, slices, strings, structs, enums, tagged unions, memory management, allocators, pointers, memory layout, comptime, modules, file I/O, and command-line arguments. That's a LOT of ground -- the entire foundation of the language.

Everything from here builds on these foundations. We now have all the tools to build a real, multi-file program that processes data, reads files, writes reports, and handles errors correctly. All the pieces are in place. No more isolated concepts -- we combine them.

Having said that, knowing the pieces and knowing how to assemble them into something real are different skills. Architecting a program -- deciding which data structures to use, where to split modules, how to handle errors at different layers, when to allocate and when to use fixed buffers -- that's where the real learning happens. You've got the vocabulary now. Time to write some sentences ;-)

Bedankt en tot de volgende keer! ;-)

@scipio