Like last time, this post describes both the final result and the road there. So, there will be lots of code and lots of diffs, beware!

- I just write reasonably efficient code without getting too deep into low-level details to get the job done in a couple of hours.
- I try to push the envelope and see what could be done if one’s actually willing to go into those details (within some limits, of course, so no GHC hacking!)

Turns out that yes, it’s indeed possible to write something with C-level performance in a matter of a couple of hours. Moreover, Haskell’s type system shines here: class-constrained parametric polymorphism enables using the same decoder implementation for pixels with very different representations, allowing us to squeeze as much performance as is reasonably possible without duplicating the code.

In this post, I’ll describe the Haskell implementation of the decoder, and the steps I took to get from (1) to (2) for the decoder.
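As a rough sketch of what that class-constrained polymorphism buys us (the names below are hypothetical and not the actual decoder’s API), a single decoder body can serve pixel types with very different in-memory representations:

```haskell
import Data.Word (Word8, Word16)

-- A hypothetical class abstracting over pixel representations:
-- the decoder only needs to know how to build a pixel from raw data.
class Pixel p where
  fromRaw :: Word16 -> p

newtype Gray8  = Gray8  Word8  deriving Show
newtype Gray16 = Gray16 Word16 deriving Show

instance Pixel Gray8 where
  fromRaw = Gray8 . fromIntegral . (`div` 256)

instance Pixel Gray16 where
  fromRaw = Gray16

-- A single decoder body serves both representations;
-- GHC can specialize it per instance, so no abstraction cost remains.
decodeScanline :: Pixel p => [Word16] -> [p]
decodeScanline = map fromRaw

main :: IO ()
main = do
  print (decodeScanline [0, 65535] :: [Gray8])   -- [Gray8 0,Gray8 255]
  print (decodeScanline [0, 65535] :: [Gray16])  -- [Gray16 0,Gray16 65535]
```

The point is that the per-pixel logic lives in the instances, while the decoding loop is written exactly once.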

`stack`-based projects). Spoiler: it’s much, much more straightforward than a few years ago, almost to the point of “vim and Haskell” posts being no longer necessary.

One huge source of inconsistency is non-terminating computations; hence languages like Idris or Agda go to great lengths to ensure that functions indeed do terminate. But, for deep reasons, a purely automated check having neither false positives nor false negatives just does not exist, so compromises must be made. Naturally, when talking about proofs, it’s better to be safe than sorry, so these languages strive to never label a function that doesn’t really terminate for all inputs as terminating. Consequently, this means that there are terminating functions that the termination checker does not accept. Luckily, these functions can be rewritten to make the checker happy if all the recursive calls are "smaller" in some sense.

This post emerged from me trying to persuade Agda that a bunch of mutually recursive functions are all terminating. I went through Agda’s standard library to figure out how to do this, taking notes about what the different abstractions I encountered mean and expand to. Then I figured that, if I pour some more words into my notes, they might turn out to be useful for somebody else, so, well, here it is.

On the other hand, there is a `GHC.Stack` module (by the way, described as “Access to GHC’s call-stack *simulation*”, italics ours), as well as some mechanism for capturing `CallStack`s. How do those call stacks connect with the graph reduction model? Let’s maybe carry out a few *computational* experiments, all while keeping track of the obstacles we hit, shall we?
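As a warm-up, here is roughly how those `CallStack`s get captured in practice: any function with a `HasCallStack` constraint receives an implicit stack recording its call sites (a minimal sketch, with hypothetical function names):

```haskell
import GHC.Stack (HasCallStack, callStack, prettyCallStack)

-- A function with a HasCallStack constraint gets an implicit
-- CallStack argument recording where it was called from.
inner :: HasCallStack => String
inner = prettyCallStack callStack

outer :: HasCallStack => String
outer = inner

main :: IO ()
main = putStrLn outer  -- shows the call sites of inner and outer
```

Note that the stack only grows through functions that themselves carry the `HasCallStack` constraint; drop it from `outer`, and `inner`’s call site there disappears from the trace.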

`wc`-like program, and we’ve also compared its performance against the full-blown Unix `wc`. The results were quite interesting: our implementation managed to beat `wc` by a factor of 5. Of course, that’s quite an unfair comparison: our implementation is hardcoded to count just the bytes, lines, and words. `wc`, on the other hand, has command-line options to select specific statistics, it supports some additional ones like maximum line length, it treats Unicode spaces properly (in a Unicode-aware locale, of course), and so on. In other words, it’s better to consider what we did last time a proof of concept, showing that it’s possible to achieve (and overcome) C-like performance on this task, even if with all those concessions.
Today we’ll look at ways of productionizing the toy program from the previous post. Our primary goal is to let the user select various statistics and to compute only what was selected. We’ll try to do this in a modular and composable way, striving to isolate each statistic into its own unit of some sort.
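One possible shape for such a self-contained statistic unit (a sketch with hypothetical names, not necessarily the representation we’ll end up with) is a bundle of an initial accumulator, a per-byte step, and a result renderer:

```haskell
import Data.Word (Word8)

-- A hypothetical self-contained statistic: an initial accumulator,
-- a per-byte step function, and a way to render the final result.
data Statistic s = Statistic
  { initial :: s
  , step    :: s -> Word8 -> s
  , render  :: s -> String
  }

bytesStat :: Statistic Int
bytesStat = Statistic 0 (\n _ -> n + 1) show

linesStat :: Statistic Int
linesStat = Statistic 0 (\n b -> if b == 10 then n + 1 else n) show

-- Run one statistic over an input; selected statistics could be run together.
runStat :: Statistic s -> [Word8] -> String
runStat (Statistic i s r) = r . foldl s i

main :: IO ()
main = do
  let input = map (fromIntegral . fromEnum) "hello\nworld\n" :: [Word8]
  putStrLn (runStat bytesStat input)  -- 12
  putStrLn (runStat linesStat input)  -- 2
```

Each statistic is then testable and understandable in isolation, which is exactly the local reasoning we are after.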

Indeed, if we look at the C version — well, personally I wouldn’t call it a prime example of readable and maintainable code, as the different statistics are computed in a single big 370-line-long function. This is something we’ll try to avoid here.

Moreover, we’ll try to express that certain statistics, like byte count or line count, can be computed more efficiently if we don’t have to look at each byte, while other statistics, like word count or maximum line length, just *need* to look at every byte one by one (unless one does some clever and non-trivial broadword programming or SIMD-enabled things, which is beyond the scope of this post). For instance, the byte count can be computed in `O(1)` if we know we’re reading from a file: we can just take the file size and call it a day!
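A minimal sketch of that `O(1)` byte count (assuming the input is a regular file; the demo path is made up):

```haskell
import System.IO (withFile, hFileSize, IOMode (ReadMode))

-- When the input is a regular file, the byte count is just its size:
-- no need to stream the contents at all.
byteCount :: FilePath -> IO Integer
byteCount path = withFile path ReadMode hFileSize

main :: IO ()
main = do
  writeFile "/tmp/demo.txt" "hello\n"
  byteCount "/tmp/demo.txt" >>= print  -- prints 6
```

Of course, this shortcut is only valid when we actually have a seekable file rather than, say, a pipe on stdin, so the real program has to fall back to counting when necessary.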

In addition to that, we will, among other things:

- implement more statistics with ease, enjoying local reasoning;
- throw in some tests, enjoying local reasoning once more;
- try out some kinda-dependently-typed techniques, successfully obtaining working code but failing spectacularly on the performance side of things;
- play around with Template Haskell;
- marvel at the (un)predictability and (un)reproducibility of the resulting code performance.

`wc` command that is about 4-5 times faster than the corresponding GNU Coreutils implementation.

So I’ve recently come across a post by Chris Penner describing a Haskell implementation of the Unix `wc` command. Chris did a great job optimizing the Haskell version, as well as showing how some high-level primitives (monoids and streaming, for one) turn out to be useful here, although the result was still a bit slower than C. There’s also a parallel version that relies heavily on the monoidal structure of the problem, and that one actually beats C.
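The monoidal structure mentioned above can be sketched as follows (a simplification: real word counting also has to track whether a chunk begins or ends mid-word):

```haskell
-- Per-chunk counts combine associatively, so chunks can be processed
-- independently (even in parallel) and then merged.
data Counts = Counts { bytes :: !Int, newlines :: !Int } deriving Show

instance Semigroup Counts where
  Counts b1 l1 <> Counts b2 l2 = Counts (b1 + b2) (l1 + l2)

instance Monoid Counts where
  mempty = Counts 0 0

countChunk :: String -> Counts
countChunk s = Counts (length s) (length (filter (== '\n') s))

main :: IO ()
main = print (foldMap countChunk ["hello\n", "wor", "ld\n"])
  -- Counts {bytes = 12, newlines = 2}
```

Associativity is what makes splitting the input at arbitrary points safe: however the chunks are cut, the merged result is the same.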

But that post left me wondering: is it possible to do better without resorting to parallel processing?

Turns out the answer is yes. With some quite minor tweaks, the Haskell version manages to beat the hell out of the C version that presumably has decades of man-hours put into it.

We will also compare this to a baseline C++ implementation.

Spoiler alerts:

- The C++ implementation turns out to be slower than the fastest Haskell implementation.
- The LLVM backend really shines here.

`Has` pattern, the next obvious question is whether we can generalize further. And, turns out, we can!

In this post we’ll see how some algebraic considerations help us discover one more pattern useful with `MonadError` (and a `Generic` implementation thereof), and we’ll also update our `Has` class with one more method that brings it closer to something lens-like and makes it useful with writable environments like `MonadState`.
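To preview that lens-like direction (the method name `updateDbConfig` here is illustrative, not necessarily what we’ll settle on), the idea is to pair the getter with an updater:

```haskell
{-# LANGUAGE FlexibleInstances #-}

data DbConfig = DbConfig { dbHost :: String } deriving (Show, Eq)

-- Sketch: a Has-style class extended with an updater method,
-- which is what makes it usable with writable environments.
class HasDbConfig r where
  getDbConfig    :: r -> DbConfig
  updateDbConfig :: (DbConfig -> DbConfig) -> r -> r

instance HasDbConfig DbConfig where
  getDbConfig    = id
  updateDbConfig = id

instance HasDbConfig (DbConfig, b) where
  getDbConfig             = fst
  updateDbConfig f (d, b) = (f d, b)

main :: IO ()
main = print (updateDbConfig (\c -> c { dbHost = "localhost" })
                             (DbConfig "remote", True))
```

With an updater in the class, `MonadState`-style code can modify just its own slice of the state, e.g. `modify (updateDbConfig ...)`, without knowing anything about the rest of the environment.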

`Has` pattern, the problems that it solves, and we also wrote a few instances for our `Has`-like classes:
```
data AppConfig = AppConfig
  { dbConfig :: DbConfig
  , webServerConfig :: WebServerConfig
  , cronConfig :: CronConfig
  }

instance HasDbConfig AppConfig where
  getDbConfig = dbConfig

instance HasWebServerConfig AppConfig where
  getWebServerConfig = webServerConfig

instance HasCronConfig AppConfig where
  getCronConfig = cronConfig
```

Looks good so far. What could be the problems with this approach?

Let’s think about what other `Has` instances we might want to write.

The configs themselves are obviously good candidates for (trivially) satisfying the corresponding classes:

```
instance HasDbConfig DbConfig where
  getDbConfig = id

instance HasWebServerConfig WebServerConfig where
  getWebServerConfig = id

instance HasCronConfig CronConfig where
  getCronConfig = id
```

These instances allow us to, for example, write separate tests (or utilities like a service tool for our DB) that don’t require the whole of `AppConfig`.

This is already getting a bit boring, but hold on. Some integration tests might also involve a pair of modules, and we still don’t want to pull the whole application configuration into *all* of the modules, so we end up writing a few instances for tuples:

```
instance HasDbConfig (DbConfig, b) where
  getDbConfig = fst

instance HasDbConfig (a, DbConfig) where
  getDbConfig = snd

instance HasWebServerConfig (WebServerConfig, b) where
  getWebServerConfig = fst

instance HasWebServerConfig (a, WebServerConfig) where
  getWebServerConfig = snd

instance HasCronConfig (CronConfig, b) where
  getCronConfig = fst

instance HasCronConfig (a, CronConfig) where
  getCronConfig = snd
```

Ugh. Let’s just hope we will never need to test three modules at once so we won’t need to write nine dull instances for 3-tuples.

Anyway, if you’re anything like me, this amount of boilerplate will make you seriously uncomfortable and eager to spend a few hours looking for ways to delegate this to the compiler instead of spending a couple of minutes writing the necessary instances.
