Blog

This section of the site is dedicated to all things I find interesting, including (but not limited to) computing, computer science, and programming languages. I try to categorise posts, but have not been very strict about it.

This page only shows the last ten posts. For a comprehensive list, please see the archive.

Manage Rust installations on Gentoo with rustup

I love Gentoo. I love Rust. How about Rust on Gentoo? Well, I love it, but it’s a little more complicated.

This is a run-down of the steps I ended up taking to be able to manage the Rust installation via rustup on my Gentoo and have Portage use it for ebuilds requiring virtual/rust.

I’m a happy Gentoo user at home, building kernels and emerging all day. I’m also a happy Rust developer who uses both stable and nightly (sometimes several nightlies). When the new Firefox came out, I was excited to build it on Gentoo and enabled it with the ~amd64 keyword.

Parts of the new Firefox have been replaced by bits written in Rust, so naturally the Firefox ebuild depends on virtual/rust. This means building the Rust compiler as part of the Firefox build process (only the first time, though).

The only problem is that I already have Rust installed - in several versions, even. So there is really no need for Portage to download and install its own version.

In short, I want to:

  • Manage rustc and cargo with rustup
  • Have Portage pick up the correct rustc and cargo (i.e. the rustup-provided ones)

If no one has told you before, read:

$ man portage

Then read it again. It is a thorough description of all the features of Portage. I found this little nugget describing the package.provided feature:

A list of packages (one per line) that portage should assume have been provided. […] For example, if you manage your own copy of a 2.6 kernel, then you can tell portage that ‘sys-kernel/development-sources-2.6.7’ is already taken care of and it should get off your back about it.

What more do you want? I figured I could put package.provided in /etc/portage but it actually has to go in /etc/portage/profile. I ended up with:

$ tree /etc/portage/profile
/etc/portage/profile
├── package.provided
│   └── rust
└── profile.bashrc

1 directory, 2 files

The contents of /etc/portage/profile/package.provided/rust:

$ cat /etc/portage/profile/package.provided/rust
dev-lang/rust-1.25.0
virtual/rust-1.25.0
dev-util/cargo-0.26.0

It is a requirement that the version number be specified, which I think is fair. Rust and Firefox are slow-moving enough that it shouldn’t be a problem for me to keep up.

I tested this by unmerging virtual/rust, dev-lang/rust and dev-util/cargo, then re-emerging www-client/firefox. Until I figured out how to write package.provided correctly, Portage kept wanting to merge the unmerged packages.

After package.provided/rust was accepted, the build failed with the message “Rust compiler not found”. This is because I installed Rust and Cargo as my everyday user, but this is not the user that Portage is running as. Somehow I had to put $HOME/.cargo/bin on the PATH for the build user. I went back to reading man portage.

The solution turned out to be profile.bashrc. I’m not sure this is the best approach, but this is what I have working now:

$ cat /etc/portage/profile/profile.bashrc
export PATH="/home/me/.cargo/bin:$PATH"

STABLE=/home/me/.rustup/toolchains/stable-x86_64-unknown-linux-gnu
rustup toolchain link build-stable $STABLE &> /dev/null
rustup default build-stable &> /dev/null

The calls to rustup are necessary to expose the stable toolchain to the rustc binary found in .cargo/bin. The name build-stable is not important; it just can’t be stable.

That is all it took! Now I have rustup managing the Rust toolchains (it’s a great tool), and Portage happily uses the stable toolchain for packages requiring virtual/rust.

Futures from Scratch

This post describes my journey in implementing Future from scratch for use with tokio to write asynchronous tasks.

tl;dr: It is possible, but not in the way I want it.

Goal

I have a simple application, redshift-rs, that runs in the background and should be able to perform some simple tasks concurrently:

  • Listen for Ctrl+C interrupts (SIGINT and SIGTERM, preferably) and begin shutting down once one is received

  • Every five seconds, check to see if display settings need to be adjusted. When transitioning from day to night or night to day, this should be checked every 100ms.

The application already does this, but using threads and channels; one thread listens for signals (using the chan and chan-signal crates), another thread is a timer thread that sleeps for either five seconds or 100ms and sends a message. The main thread then selects over the channels provided to each of the background threads.
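To make the current design concrete, here is a minimal sketch of that thread-and-channel layout. It uses only std::thread and std::sync::mpsc instead of the chan and chan-signal crates the real code uses, funnels everything into a single channel rather than selecting over two, and the Event type and the omitted signal thread are hypothetical stand-ins, not the actual redshift-rs code:

use std::sync::mpsc;
use std::thread;
use std::time::Duration;

enum Event {
    Tick,     // timer thread: time to check the display settings
    Shutdown, // signal thread: SIGINT/SIGTERM received
}

fn main() {
    let (tx, rx) = mpsc::channel();

    // Timer thread: wake up every five seconds (100ms during transitions).
    let timer_tx = tx.clone();
    thread::spawn(move || loop {
        thread::sleep(Duration::from_secs(5));
        if timer_tx.send(Event::Tick).is_err() {
            break;
        }
    });

    // A second thread would watch for SIGINT/SIGTERM and send Event::Shutdown;
    // it is omitted here because std has no portable signal handling.

    // Main thread: react to whichever event arrives first.
    for event in rx {
        match event {
            Event::Tick => { /* adjust display settings if needed */ }
            Event::Shutdown => break,
        }
    }
}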

If this were Go, I’d probably stick with something like the current implementation because the Go runtime provides green threading and the three tasks outlined above would probably not use three OS threads. But this is Rust and there is (no longer) green threading.

None of the tasks are computation heavy, so it’d make more sense to have one task handling all the details: checking signals at the right time or adjusting display settings. Instead of writing this by hand, I figure this is something that Tokio and the futures library should be able to do.

Overall design

We probably want to model timeouts as a Stream and not a Future, because we want an occurrence every x ms.
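As a rough illustration only, here is a hypothetical skeleton of such a Stream in the futures 0.1 style; the Interval name and fields are made up, and this is not the design the rest of the post ends up with:

use std::time::{Duration, Instant};
use futures::{Async, Poll, Stream};

struct Interval {
    interval: Duration,
    next: Instant,
}

impl Stream for Interval {
    type Item = ();
    type Error = ();

    fn poll(&mut self) -> Poll<Option<()>, ()> {
        if Instant::now() >= self.next {
            // One tick is ready; rearm for the next one.
            self.next += self.interval;
            Ok(Async::Ready(Some(())))
        } else {
            // Who will poll us again? The same problem explored below.
            Ok(Async::NotReady)
        }
    }
}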

First steps

To get started, I looked at just implementing a Future that expires after a given Duration.

use std::time::{Duration, Instant};

struct Sleep {
    when: Instant,
}

impl Sleep {
    fn new(dur: Duration) -> Sleep {
        Sleep {
            when: Instant::now() + dur,
        }
    }

    fn is_expired(&self) -> bool {
        Instant::now() >= self.when
    }
}

It is created at some point in time and should expire after some indicated duration. We are not worried about precision.

Next we want to implement Future for Sleep. The following code could be our first attempt:

impl Future for Sleep {
    type Item = ();
    type Error = ();

    fn poll(&mut self) -> Result<Async<()>, ()> {
        if self.is_expired() {
            Ok(Async::Ready(()))
        } else {
            Ok(Async::NotReady)
        }
    }
}

Seems simple, right? It doesn’t quite work though. To see why, let us tie everything together.

fn main() {
    let sleep = Sleep::new(Duration::from_secs(5));
    current_thread::run(|_| {
        current_thread::spawn(sleep);
        println!("Started!");
    });
    println!("Finished!");
}

This program prints “Started!” and then just hangs. Closer inspection (a.k.a. throwing println!() in various places) tells us that poll() is called exactly once, before the future has expired, and returns Async::NotReady. So why isn’t it called again?

Learning more about Async::NotReady

Returning Async::NotReady is special in Tokio. It is the responsibility of the executor handling your task to make sure that poll() is called, ideally at the right time. But how is the executor supposed to know when it is the right time?

At this point, I was a little frustrated with the documentation on Tokio. Most (if not all) of the implementations of Future and Stream depend on other things that already implement those traits, and the example implementations rely on the inner implementations to do the “right thing”.

From the section on futures there is a better hint:

[…] when a task returns NotReady, once it transitioned to the ready state the executor is notified. […] When a function returns Async::NotReady, it is critical that the executor is notified when the state transitions to “ready”. Otherwise, the task will hang infinitely, never getting run again.

Well, this explains why our first implementation didn’t work.

Innermost futures, sometimes called “resources”, are the ones responsible for notifying the executor. This is done by calling notify on the task returned by task::current(). […] Before an executor calls poll on a task, it sets the task context to a thread-local variable. The inner most future then accesses the context from the thread-local so that it is able to notify the task once its readiness state changes.

So to have our Sleep future polled again we just need to notify the current task? Let us try this then:

impl Future for Sleep {
    ...
    fn poll(&mut self) -> Result<Async<()>, ()> {
        if self.is_expired() {
            Ok(Async::Ready(()))
        } else {
            let task = task::current();
            task.notify();
            Ok(Async::NotReady)
        }
    }
}

This works! Our program now prints

$ cargo run
Started!
Finished!

with five seconds in between the two lines. But this is not the “correct way” to do it. Whenever our task is polled it notifies the executor that it is ready to be polled again, which is not actually true until the whole five seconds have passed. If we count the number of times our task is polled, we get something like this:

$ cargo run
Started!
Polled 2915047 times
Finished!

Polled almost three million times (and that is not even --release)! We are essentially just busy-waiting here, which we certainly do not want to do unless necessary.
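For the curious, a count like that can be obtained with something along these lines; this is a hypothetical variant of our future, not code from the original experiment:

use std::time::Instant;
use futures::{task, Async, Future};

struct CountingSleep {
    when: Instant,
    polls: u64,
}

impl Future for CountingSleep {
    type Item = ();
    type Error = ();

    fn poll(&mut self) -> Result<Async<()>, ()> {
        self.polls += 1;
        if Instant::now() >= self.when {
            println!("Polled {} times", self.polls);
            Ok(Async::Ready(()))
        } else {
            // Same busy notification as before: "poll me again immediately".
            task::current().notify();
            Ok(Async::NotReady)
        }
    }
}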

An aside: tokio-timer

I found a library called tokio-timer that implements timing functionality for Tokio-based applications, which sounds like exactly what we are looking to do ourselves. But it is implemented with a background std::thread which is exactly what we are trying to avoid.

Conclusion

A little under two months later

I never finished writing this up. My conclusion at the time was that although it is possible to implement your own futures from scratch, without relying on inner futures to do the notification for you I couldn’t really come up with any method other than busy-waiting that does not employ another thread.

For my stated goal of reducing the number of threads in redshift-rs, I ended up experimenting with mio, which actually allowed me to implement single-threaded signal handling. The result can be found in the mio-signal branch.

Going through the Eudyptula Challenge

I have been slowly going through the awesome Eudyptula Challenge, which unfortunately has stopped accepting new participants.

The Eudyptula Challenge is (was) a Linux kernel programming challenge that takes you from creating your first “Hello, World!” module to submitting patches to the Linux kernel itself.

At the moment of writing this, I have submitted task 11 and am awaiting a reply. The last submission I made was back in April 2017, so I’m progressing rather slowly. I intend to complete it eventually, but as Little Penguin says: “Remember, this is not a race, there is no rush [and] Have fun!”.

I already have a patch accepted in the linux-next tree (a small one, fixing a code style issue).

What can be said about the challenge?

  • The majority of the time is spent reading kernel source and documentation
  • The most difficult aspect to get right is the process: understanding the cycle of submitting patches via a sane e-mail client. Fortunately, tools to help with this are built right into Git; git format-patch and git send-email are particularly useful
  • Read code! The kernel is a huge project

Why did I start the challenge? Well, to begin with, I love the Linux kernel and it would be a dream to work on it some time in the future. I’m also a big fan of open-source development, as its shared-knowledge approach is much more appealing to me. Finally, I just want to learn — the Linux kernel is one of the largest open projects and a lot of developer time has gone into it. So I figure there is a lot of useful stuff to learn.

My setup

When I began the challenge, I decided to install Gentoo on my machine and install experimental kernels on it. It has been suggested to me a couple of times to just use a virtual machine, but somehow that feels a little like cheating. I have also used Gentoo before, but never really as my primary operating system, so I figured this might be a good opportunity to get more into it.

For kernel sources, I use:

git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

which should be the official tree.

A regular kernel install roughly goes as follows (after pulling in the latest commits):

$ make oldconfig
$ make -j4
$ sudo make modules_install
$ sudo make install
$ sudo grub-mkconfig -o /boot/grub/grub.cfg

Then I can reboot and have the newly installed kernel available.

A good working practice (for any Git project, really) is always to create a new branch for some new line of work. If the work doesn’t turn out right you can always return to master and delete the work branch.

Pijul, Version Control System in Rust

I just discovered Pijul – a new version control system written in Rust, built around a solid mathematical foundation (a categorical theory of patches) in the spirit of darcs. Pijul works on patches, like darcs, unlike systems such as Git that are based on snapshots. One thing to remember is:

  • There is no explicit ordering of patches, just dependencies

Committing

After making some change and deciding to commit it, execute

$ pijul record

Pijul will ask, for each individual change, whether it should be recorded or not (like git commit --interactive):

Shall I record this change? [ynkad]

The choices are yes, no, skip, add, delete.

Getting started

$ pijul init pjt; cd pjt
$ echo 'fn main() { println!("Hello, world!"); }' > main.rs
$ pijul add main.rs
$ pijul diff
added file /home/t/pijul/pjt/main.rs

In file "/home/t/pijul/pjt/main.rs"

+ fn main() { println!("Hello, world!"); }

$ pijul record
added file /home/t/pijul/pjt/main.rs

Shall I record this change? [ynkad] y

In file "/home/t/pijul/pjt/main.rs"

+ fn main() { println!("Hello, world!"); }

Shall I record this change? [ynkad] y

What is your name <and email address>? Thomas Jespersen <laumann@..>
What is the name of this patch? Initial commit
Recorded patch AfbOKpDOn8URmICsTIjBHdlygU7bAoDGwSwPI3ik4XrtIO1uWZ4Xcl5I8RF8B_sxk1q7ib0lMzv0QO2cOeWhpc8

The above steps go through initialising our repository pjt and recording our first patch.

To see the patches currently recorded on the active branch, run pijul changes:

$ pijul changes
hash AfbOKpDOn8URmICsTIjBHdlygU7bAoDGwSwPI3ik4XrtIO1uWZ4Xcl5I8RF8B_sxk1q7ib0lMzv0QO2cOeWhpc8
Authors:   ["Thomas Jespersen <laumann@..>"]
Timestamp: 2017-05-06 18:53:36.600768886 UTC

    Initial commit

Your mileage may vary here! I’ve been playing with formatting the output of changes to look like git log because the added whitespace and aligned header fields aid in visually separating patches.

Let’s edit our main.rs file to format it a little better:

fn main() {
    println!("Hello, world!");
}

and execute pijul diff:

$ pijul diff
In file "/home/t/pijul/pjt/main.rs"

- fn main() { println!("Hello, world!"); }
+ fn main() {
+     println!("Hello, world!");
+ }

No surprises. Just record it:

$ pijul record -a -m "Proper formatting of main.rs"
Recorded patch ASZp-tVdBdDd5tDi3vZ-UkD80Pb7dAjb-YHhS6of-OENoku7gNW8w8Bk1xH1mq-lX8OwqQ84kqqp76QMRr6723U

Now we have two changes:

$ pijul changes
hash ASZp-tVdBdDd5tDi3vZ-UkD80Pb7dAjb-YHhS6of-OENoku7gNW8w8Bk1xH1mq-lX8OwqQ84kqqp76QMRr6723U
Authors:   ["Thomas Jespersen <laumann@..>"]
Timestamp: 2017-05-06 19:19:42.582551303 UTC

    Proper formatting of main.rs

hash AfbOKpDOn8URmICsTIjBHdlygU7bAoDGwSwPI3ik4XrtIO1uWZ4Xcl5I8RF8B_sxk1q7ib0lMzv0QO2cOeWhpc8
Authors:   ["Thomas Jespersen <laumann@..>"]
Timestamp: 2017-05-06 18:53:36.600768886 UTC

    Initial commit

Like Git, the top-level directory of a Pijul repository contains a .pijul directory. For our example project:

$ ls -F .pijul
patches/  pristine/  changes.bWFzdGVy  id  meta.toml  version
  • id — A 100-character randomly generated ID
  • version — records the Pijul version, ie 0.5.5
  • meta.toml — preferences are recorded here (a la .git/config). Currently just records default authors, and push/pull repositories.
  • changes.bWFzdGVy — contains a hash table of patches applied to the branch named by bWFzdGVy, which is the base64 encoding of master. A changes.<base64> file exists for each branch.
  • patches/ — contains gzipped patches by name <hash>.gz.
  • pristine/ — a Sanakirja database with a number of tables dealing with patches, files and content. An overview of the format can be found here.

A few things to note

  • Pijul allows for a patch to have multiple authors.
  • Pijul records patch dependencies and it is interesting to note that pijul changes does not necessarily output the patch headers in chronological order, but their dependency ordering is respected.

Developer Notes

Build Pijul from source and add pijul/pijul/target/debug to your PATH:

$ export PATH="/home/t/pijul/pijul/pijul/target/debug:$PATH"

For debugging output when running Pijul commands, use RUST_LOG, e.g.:

$ RUST_LOG=libpijul,pijul pijul <command>

Pijul can also output the dependency graph as a dot digraph, by running pijul info --debug. The following image was obtained from our example repository. I don’t yet understand exactly how the graph describes the repository, so I’ll leave that for an update or a later post.

How to derive Show

Disclaimer: All the code quoted in this post is extracted from the Rust compiler source code. Most snippets are annotated with the file name relative to $RUST_ROOT/src.

Rust is an awesome language. In the beginning I had a lot of trouble coming to terms with some of the decisions taken in the language specification. But a little over a year later, I must admit that I am enjoying it. To be honest, it is probably not only the language itself, but the community around it – there are a lot of opinions, but the general tune is “we want to make the best systems programming language possible”. And that, of course, will leave some less content and some very content, but all in all there is a lot of excitement around it.

Recently, I have been interested in writing compiler plugins, as I think they will come in handy for my Master’s project. The documentation is a little sparse, only grazing the surface, but it’s probably for the better as that whole section of the compiler is still marked as unstable. On the other hand, I doubt it will change drastically before 1.0, as quite a few projects make use of it as it is (to great effect I might add).

Looking through the Rust source code, trying to learn about macro expansions, I naturally find a definition of the different types of syntax extensions available in Rust (file: libsyntax/ext/base.rs), defined as enum SyntaxExtension:

  • Decorator: A syntax extension attached to an item, creating new items based on it.
  • Modifier: Syntax extension attached to an item, modifying it in-place.
  • MultiModifier: Same as above, but more flexible (whatever that means)
  • NormalTT: A normal, function-like extension, for example bytes! is one such
  • IdentMacroExpander: As a NormalTT, but has an extra ident before the block.
  • MacroRulesTT: Represents macro_rules! itself.

How interesting. Then a question popped up: How is a standard derivable trait such as Show actually derived?
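To make the question concrete, here is a small hypothetical example of my own (not taken from the compiler): putting #[derive(Debug)] (or #[derive(Show)] back then) on a struct is, in effect, asking the compiler to generate an impl much like one we could write by hand:

use std::fmt;

struct Point {
    x: i32,
    y: i32,
}

// Roughly what the derive is expected to produce for Point.
impl fmt::Debug for Point {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        write!(f, "Point {{ x: {:?}, y: {:?} }}", self.x, self.y)
    }
}

The question is how the compiler gets from the attribute to an impl like that.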

First of all, in the same file, there is a function defining all the basic syntax extensions, initial_syntax_expander_table() in which we find the following lines:

fn initial_syntax_expander_table(ecfg: &expand::ExpansionConfig) -> SyntaxEnv {
    // ...
    let mut syntax_expanders = SyntaxEnv::new();

    // ...
    syntax_expanders.insert(intern("derive"),
                            Decorator(box ext::deriving::expand_meta_derive));
    // ...
}

which tells us that the “derive” functionality is registered as a decorator (which makes sense), and expands to call the function expand_meta_derive(). This function is defined in libsyntax/ext/deriving and is not much different from any other syntax extension.

// File: libsyntax/ext/deriving/mod.rs
pub fn expand_meta_derive(cx: &mut ExtCtxt,
                          _span: Span,
                          mitem: &MetaItem,
                          item: &Item,
                          mut push: Box<FnMut(P<Item>)>) {
    // ...
}

First it checks the node type of mitem. If it is not a list, or if it is an empty list, an error is emitted. Otherwise all the items are inspected in turn. This gives us exactly what derive can derive:

// File: libsyntax/ext/deriving/mod.rs in function expand_meta_derive()
match tname.get() {
    "Clone" => expand!(clone::expand_deriving_clone),

    "Hash" => expand!(hash::expand_deriving_hash),

    "RustcEncodable" => {
        expand!(encodable::expand_deriving_rustc_encodable)
    }
    "RustcDecodable" => {
        expand!(decodable::expand_deriving_rustc_decodable)
    }
    "Encodable" => {
        cx.span_warn(titem.span,
                     "derive(Encodable) is deprecated \
                      in favor of derive(RustcEncodable)");

        expand!(encodable::expand_deriving_encodable)
    }
    "Decodable" => {
        cx.span_warn(titem.span,
                     "derive(Decodable) is deprecated \
                      in favor of derive(RustcDecodable)");

        expand!(decodable::expand_deriving_decodable)
    }

    "PartialEq" => expand!(eq::expand_deriving_eq),
    "Eq" => expand!(totaleq::expand_deriving_totaleq),
    "PartialOrd" => expand!(ord::expand_deriving_ord),
    "Ord" => expand!(totalord::expand_deriving_totalord),

    "Rand" => expand!(rand::expand_deriving_rand),

    "Show" => {
        cx.span_warn(titem.span,
                     "derive(Show) is deprecated \
                      in favor of derive(Debug)");

        expand!(show::expand_deriving_show)
    },

    "Debug" => expand!(show::expand_deriving_show),

    "Default" => expand!(default::expand_deriving_default),

    "FromPrimitive" => expand!(primitive::expand_deriving_from_primitive),

    "Send" => expand!(bounds::expand_deriving_bound),
    "Sync" => expand!(bounds::expand_deriving_bound),
    "Copy" => expand!(bounds::expand_deriving_bound),

    ref tname => {
        cx.span_err(titem.span,
                    &format!("unknown `derive` \
                             trait: `{}`",
                            *tname)[]);
    }
}

Straight from the heart (or kidney) of the beast! Not only do we clearly see that Show is supported, mapping to the function expand_deriving_show, we also see that it comes with a deprecation warning, and we should prefer Debug over Show. At the moment there is no difference, as they both map to the same function.

We are getting close to the end here. Instead of explaining what goes on I am going to quote the entire function expand_deriving_show:

pub fn expand_deriving_show<F>(cx: &mut ExtCtxt,
                               span: Span,
                               mitem: &MetaItem,
                               item: &Item,
                               push: F) where
    F: FnOnce(P<Item>),
{
    // &mut ::std::fmt::Formatter
    let fmtr = Ptr(box Literal(Path::new(vec!("std", "fmt", "Formatter"))),
                   Borrowed(None, ast::MutMutable));

    let trait_def = TraitDef {
        span: span,
        attributes: Vec::new(),
        path: Path::new(vec!["std", "fmt", "Debug"]),
        additional_bounds: Vec::new(),
        generics: LifetimeBounds::empty(),
        methods: vec![
            MethodDef {
                name: "fmt",
                generics: LifetimeBounds::empty(),
                explicit_self: borrowed_explicit_self(),
                args: vec!(fmtr),
                ret_ty: Literal(Path::new(vec!("std", "fmt", "Result"))),
                attributes: Vec::new(),
                combine_substructure: combine_substructure(box |a, b, c| {
                    show_substructure(a, b, c)
                })
            }
        ],
        associated_types: Vec::new(),
    };
    trait_def.expand(cx, mitem, item, push)
}

This is beautiful! Deriving Show looks a lot like we had written it by hand. We have a trait definition for std::fmt::Debug with no additional bounds nor generics. There is one method called fmt that takes &self (borrowed explicit self) and a pointer to a std::fmt::Formatter as arguments. The return type is std::fmt::Result.

This is not the end, however, since nothing so far mentions the name of the structure we are trying to derive Show for. That must take place in trait_def.expand(). This function expands the trait definition, ensuring that the derived-upon item is either a struct or an enum, taking care of various possible error conditions, juggling lifetimes, generics, where clauses and associated types.

All this boils down to the following item creation:

cx.item(
    self.span,
    ident,
    a,
    ast::ItemImpl(ast::Unsafety::Normal,
                  ast::ImplPolarity::Positive,
                  trait_generics,
                  opt_trait_ref,
                  self_type,
                  methods.into_iter()
                         .map(|method| {
                             ast::MethodImplItem(method)
                         }).chain(
                             associated_types.map(|type_| {
                                 ast::TypeImplItem(type_)
                             })
                         ).collect()))

which I will not even pretend to understand. We can conclude that we end up calling cx.item, creating a new item in the AST. The item() method is not defined on ExtCtxt itself, but rather declared in a trait AstBuilder, which is implemented for ExtCtxt.

impl<'a> AstBuilder for ExtCtxt<'a> {
    // ...
    fn item(&self, span: Span, name: Ident,
            attrs: Vec<ast::Attribute>, node: ast::Item_) -> P<ast::Item> {
        // FIXME: Would be nice if our generated code didn't violate
        // Rust coding conventions
        P(ast::Item {
            ident: name,
            attrs: attrs,
            id: ast::DUMMY_NODE_ID,
            node: node,
            vis: ast::Inherited,
            span: span
        })
    }
    // ...
}

So there you have it. How Show (or Debug) gets derived in Rust. It is a rather long story, with some gaps, but it is very instructive to skip around the compiler infrastructure to see how some of the AST-mangling syntax extensions do their work.

If you stuck with it this far, thanks for reading; I hope you enjoyed it.

Pushing and tracking a local branch

Workflow note: A fairly common workflow pattern has established itself:

  • Create local branch, call it fx for “feature x”
  • Work on it for a while (committing frequently)
  • Push it to origin
  • Periodically merge master into it
  • Eventually merge it back into master

But I tend to forget some of the commands I need to type (especially when dealing with remote tracking branches). This is a quick run-down of the common commands.

$ git checkout -b fx

Creates and checks out fx branch.

The biggest problem sometimes is pushing this new branch to a remote. Very often I’ll just do:

$ git push origin fx

which achieves exactly that, but there is no remote-tracking, i.e. something like the following is missing from .git/config:

[branch "fx"]
	remote = origin
	merge = refs/heads/fx

which we can fix in a few ways. One way is simply adding the section to your config file, which is probably best to do through the CLI:

$ git config branch.fx.remote origin
$ git config branch.fx.merge refs/heads/fx

or simply be smart enough to include -u when pushing the branch the first time:

$ git push -u origin fx

which takes care of setting exactly these tracking parameters in the configuration.

From the Rust Book of IO

This little gem showed up while perusing the Rust source code (src/libstd/io/stdio.rs):

And so begins the tale of acquiring a uv handle to a stdio stream on all
platforms in all situations. Our story begins by splitting the world into two
categories, windows and unix. Then one day the creators of unix said let
there be redirection! And henceforth there was redirection away from the
console for standard I/O streams.

After this day, the world split into four factions:

1. Unix with stdout on a terminal.
2. Unix with stdout redirected.
3. Windows with stdout on a terminal.
4. Windows with stdout redirected.

Many years passed, and then one day the nation of libuv decided to unify this
world. After months of toiling, uv created three ideas: TTY, Pipe, File.
These three ideas propagated throughout the lands and the four great factions
decided to settle among them.

The groups of 1, 2, and 3 all worked very hard towards the idea of TTY. Upon
doing so, they even enhanced themselves further then their Pipe/File
brethren, becoming the dominant powers.

The group of 4, however, decided to work independently. They abandoned the
common TTY belief throughout, and even abandoned the fledgling Pipe belief.
The members of the 4th faction decided to only align themselves with File.

tl;dr; TTY works on everything but when windows stdout is redirected, in that
        case pipe also doesn't work, but magically file does!

I especially like that the TL;DR is located at the bottom.

Credit: From what git blame tells me, the above quote was authored by Alex Crichton.

More notes on GNU Autotools

Here’s a great slide show (from this site) explaining how to use GNU Autotools for a given project.

Personally, I find it very confusing to set up Autotools for the first time. The introductory texts available online rarely provide a full picture, since Automake and Autoconf are two different tools that just happen to be orchestrated together very often. On top of that you have commands such as autoheader and autoreconf (the latter of which is in some places warned against for some reason).

There are even fewer examples of how to set up a library. Apart from the aforementioned tools, there is also Libtool, which should alleviate headaches when building a library.

But to be honest, it all seems to be a matter of taste.

The goals of my project are:

  • Building both statically and dynamically linkable libraries (.a and .so respectively)
  • Building cross-platform, preferably according to some C standard (to improve portability)

Notable projects that serve as inspiration points are:

  • Tig: A text-mode interface for Git
  • GMP: The GNU Multi Precision Arithmetic library
  • libgit2: A portable, zero-dependencies C implementation of core Git methods

Notably, libgit2’s zero-dependency and C89-compliant implementation makes it attractive as a source of inspiration, but for their build process, they have for some unfathomable reason chosen CMake instead of make.

Rather, in order to achieve the goals listed above, I believe I could make do with autoconf and autoheader (but not automake) a la Tig, and generate a config.make, which could be fed into the Makefile.

GNU Autotools and friends

If you want to write a library for distribution on most Un*x-like systems, chances are you’ll want to use GNU Autotools. I have for a long time been curious how these tools work, and how the seemingly indecipherable syntaxes of Makefile.am and configure.ac were interpreted. And what is the relationship between automake, autoconf and aclocal?

The following online book is well worth the read:

So far the general idea is the following:

  • aclocal generates an aclocal.m4 by scanning configure.ac (from the man page)
  • autoconf is for generating ./configure which figures out the configuration of the installation system; while
  • automake is for generating a Makefile.in, a Makefile template

A common pattern in C header files is the following:

/* File: some-header.h */
#ifdef __cplusplus
extern "C" {
#endif

/* Contents of header file */

#ifdef __cplusplus
}
#endif

which allows the C code to be used from C++. The author suggests instead to have a common header file with the following ifdef magic:

/* File: common.h */
#ifdef __cplusplus
# define BEGIN_C_DECLS extern "C" {
# define END_C_DECLS }
#else
# define BEGIN_C_DECLS 
# define END_C_DECLS
#endif

And rewrite the above as

/* File: some-header.h */
#include <common.h>
BEGIN_C_DECLS

/* Contents of header file */

END_C_DECLS

Notes on Linear Logic and Types

I think I need to know something about linear types in order to understand session types, so here is some of the reading material I’ve consulted so far:

A great paper by Philip Wadler, a really good introductory text on linear logic and linear types.

A more pragmatic direct introduction to linear types (not so much about linear logic). Motivated in a functional setting (with LISP).

A more (seemingly) authoritative piece on linear logic, but I have not been able to print it yet—the printer outputs some garbled version. The only part that prints nice is the front page.