Guile Hoot with Fibers
Until recently the Scheme programming language in general, and Guile in particular, had no facilities for coding front-end web applications. Guile Hoot (with Fibers) addresses this gap. Guile Hoot is an implementation of Guile that runs in the browser via WebAssembly compilation; Fibers is a library that provides concurrency for Guile Scheme in the tradition of Concurrent ML. Luckily for us, Guile Hoot comes with Fibers out of the box, so coding clean, event-driven browser applications in Guile is now possible.
David Thompson has an excellent FOSDEM 2025 presentation on these new capabilities. He has also written a blog article, Building Interactive Web Pages with Guile Hoot, on how to use Hoot for building interactive web pages. Dave's piece, in conjunction with Hoot's documentation, provides much of the necessary background for the reader, particularly if they want to play with the code.
Proof of Concept
Strictly speaking, we implement an event-driven architecture; see HtDP, Section 2.5 for an elementary discussion of event-driven interactive programs. Using tic-tac-toe as an example (click here for a live demo), we show how to implement a functional web programming pattern, the FLUX architecture, with Guile Hoot and Fibers. Among other things, we demonstrate atomic state changes, time-travel, concurrent programming as a way to separate concerns, and a simple but convenient approach for rendering HTML.
I couldn't find much introductory material for the concurrency model implemented in Fibers, so towards the end of this post I include a section that reviews basic Fibers concepts, followed by a detailed discussion of the asynchronous queue that we use for event processing.
For now, all we need to know about this asynchronous queue is that event messages can be enqueued by calling (put-message enq-ch evt) and dequeued with (get-message deq-ch). Note that dequeueing an event will block if the queue is empty; enqueueing events never blocks.
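As a quick usage sketch (this assumes the inbox constructor developed later in this post, which returns the two channels and a condition variable):

```scheme
;; Usage sketch only: assumes the inbox constructor described later in
;; this post, which returns two channels and a condition variable.
(use-modules (srfi srfi-11)) ; for let-values

(let-values (((enq-ch deq-ch stop?) (inbox)))
  ;; enqueue an event message; this never blocks
  (put-message enq-ch (make-event 'start #f))
  ;; dequeue it for processing; this would block on an empty queue
  (display (get-message deq-ch)))
```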
Our Tic-Tac-Toe
Below is a picture of a game in progress:

For our simple game, we display a header, a status line that tells us what’s happening in the game (more on this below), a board with the moves by the players, and a time-travel stack of buttons. In the particular game being displayed, the players have executed 6 moves; we are looking at the state of the board after the 3rd move, having clicked on the time-travel button corresponding to MOVE: 3.
At this point, a player can click on another time-travel button to go backwards or forwards in time, or click a square on the board, which will result in a new move if the square is empty, with the corresponding adjustment to the time-travel buttons.
The FLUX Architecture
This article gives a good overview of the FLUX architecture in the context of Facebook and React. The FLUX architecture was championed by Facebook as the basic software pattern for coding client web applications, which led to ReactJS and its ecosystem. We use the idea of unidirectional flow but stay away from the React framework and its ecosystem. The basic idea is quite simple: implement a unidirectional data flow as opposed to the traditional MVC approach:

Browser events–wherever they are captured–are converted into a data structure that encodes the information about the event into an event message. This message is then put on a queue. An event loop listens for messages on this queue, dispatches each message to a handler, and then makes changes to state as needed. The last step is to trigger the re-rendering of the application, reflecting the changes made to state. State changes take place in response to events, and exclusively in the context of the event loop. A new event starts the cycle all over again.
Here are these ideas in code; assume that event messages are encoded and put into an event queue:
(lambda (resolved rejected)
  (call-with-async-result
   resolved rejected
   (lambda ()
     ;; set up asynchronous channels
     (let-values (((enq-ch deq-ch stop?) (inbox)))
       ;; instantiate the application
       (let ((state (make-atomic-box (make-initial-state)))
             (app (tic-tac-toe enq-ch deq-ch stop?)))
         ;; send start message
         (put-message enq-ch (make-event 'start #f))
         ;; event loop
         (let forever ((evt (get-message deq-ch)))
           ;; modify global state per event
           (handle-events state evt)
           ;; render *after* new state
           (render (app (atomic-box-ref state)))
           (forever (get-message deq-ch))))))))
See the Guile Hoot Fibers documentation for a discussion of the required boilerplate code. Ignoring the scaffolding and zooming in on the thunk, we see that applying the function inbox results in the creation of two Fibers channels, enq-ch and deq-ch, as well as stop?, a condition variable that can be used by Fibers. These channels provide the queue used for handling events. Technically, we only need to pass enq-ch to tic-tac-toe.
Next, we invoke the function tic-tac-toe with these Fibers variables as parameters to instantiate app, a dynamic templating function that returns pseudo-HTML as a function of the state value passed to it. More precisely, app takes state as input and generates SXML, Guile's native representation of XML, which is very handy for HTML. The pseudo-HTML is converted into actual HTML and injected into the DOM during rendering. Thus, app is the dynamic component or application that is updated and re-rendered in response to events.
Now we are ready to get the ball rolling: first, the start event is put on the event queue, and then we start an event loop that: a) receives the next event from the queue, b) invokes an event dispatcher to modify state based on the event's payload, c) renders the dynamic template using the new state value, and d) returns to listening on the queue for new events. Note that putting the start event on the queue is what makes the first render of the application possible: otherwise, the event loop would block forever waiting for an event. Once the game starts, the loop waits for events, which in our case are mouse clicks on the squares of the tic-tac-toe board.
We see immediately that this event loop corresponds exactly to the diagram of the FLUX architecture sketched above.
Dynamic Templating
After the start event, all the other events processed correspond to mouse clicks, either on squares on the board or on the time-travel buttons. The handlers for these events are coded into the dynamic template we use for specifying the look, feel, and actions of our web application. We discuss in the next section what exactly these handlers do. But first, let’s review how dynamic templating works in Guile.
As mentioned already, we rely on Dave Thompson's work for HTML rendering, explained in his excellent blog post Building Interactive Web Pages with Guile Hoot, which I recommend the reader study in detail. For our purposes, his piece can be conceptually split into two parts: templating and rendering.
For templating, the main idea is to use quasi-quoting, a standard Guile feature, to describe the HTML of components and their actions (e.g., click) in Guile code. The main advantage of this approach is that we have the full power of Guile for specifying dynamic components, as opposed to the limited DSLs used by templating engines.
Static components are simple enough: for example, below is the code for the title at the top of the page:
(define header
  `(h1 "Tic-Tac-Toe"))
This looks very much like the corresponding HTML code–sans some of the angle brackets.
Dynamic components are a little more complicated but still straightforward for a Guile programmer. Consider the component that shows the current status of the game, which tells players whether the game has been won, tied, or which player gets to play next:
(define (status state)
  (let* ((board (get-current-board state))
         (winner (win-game board))
         ;; tie if every square has a token and winner is #f
         (tie (every identity (vector->list board)))
         (msg (cond
               (winner (string-append "Winner: " winner))
               (tie "Game tied!")
               (else (string-append "Next Player: "
                                    (token state))))))
    `(div (@ (class "status")) ,msg)))
This function takes the current state as input, retrieves the current board, and computes whether there is a winner or a tied game. It then uses one of these values to construct msg, a string with the corresponding message, unless neither holds, in which case msg indicates which player goes next. Simple stuff, normal Guile.
Browser Events
We saw above that the event loop takes coded events from a queue for processing. In this section we show how those events are placed in the queue.
Our tic-tac-toe board is a collection of squares:
(define (display-square state i)
  (let ((square (vector-ref (get-current-board state) i)))
    (or square "")))

(define (square state i)
  `(div (@ (class "grid-margin")
           (click ,(lambda (event)
                     (put-message enq-ch (make-event 'move i)))))
        ,(display-square state i)))

(define (board state)
  `(div (@ (class "table-width"))
        ,@(map (lambda (i)
                 (square state i))
               (iota 9))))
Reading from bottom to top, the board is a div that holds nine squares, each of which is a div with a click handler to register a player's move.
This is where the magic happens: the handler converts a browser click into an event, a record type that holds the event type (move in this case) and a payload–the data associated with the event–the square's index in our case. Any browser event can thus be translated into our event record type using an appropriate event type encoding. Our setup doesn't require us to extract data from the mouse click event (e.g., x- and y-positions), but that's easy enough to do–look at Dave Thompson's piece for guidance.
This transmutation happens inside the call to put-message as it puts the event on the event queue, which makes the conversion thread-safe. Put another way, the handler converts a physical action into a datum.
Finally, the square function uses display-square to display a representation of the square–a player token or a blank space.
The sharp reader will have noticed that the event loop takes events from the deq-ch channel with get-message, whereas the click handler above puts them into the enq-ch channel with put-message.
An event is a record-type consisting of two slots: a type and a payload. Our game recognises three types of event: move events, whose payload is the index of the square on the board being clicked; travel events, whose payload is similarly the index of the time-travel button clicked; and a start event, whose payload value of #f is ignored. The start event is generated right before the event loop is instantiated, and it is processed exactly once.
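The post does not show the definition of the event record type; a minimal sketch using SRFI-9 records (the field and accessor names are my assumptions) might look like this:

```scheme
;; Sketch only: the actual record definition is not shown in the post,
;; so the field and accessor names here are assumptions.
(use-modules (srfi srfi-9))

(define-record-type <event>
  (make-event type payload)
  event?
  (type event-type)        ; 'move, 'travel, or 'start
  (payload event-payload)) ; square index, button index, or #f

;; e.g., (make-event 'move 4) encodes a click on square 4
```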
Just like before, we code time-travel events via a handler:
(define (display-travel-square i)
  (if (zero? i)
      "Game Start"
      (string-append "Move: " (number->string i))))

(define (travel-button i)
  `(div (@ (class "travel-button")
           (click ,(lambda (event)
                     (put-message enq-ch (make-event 'travel i)))))
        ,(display-travel-square i)))

(define (time-travel-moves state)
  (let ((history (get-history state)))
    ;; how to do css styles on elements
    `(div (@ (style "padding-top:0.10rem;"))
          ,@(map travel-button (iota (get-total-boards state))))))
As was the case earlier for move events, the click event handler puts a time-travel event on the queue using put-message for handling by the event loop. Displaying the time-travel buttons is done much as before: we build a string corresponding to each button.
Handling Events
Going back to the heart of our event loop, events pulled off the queue are sent to a handler. The handler is pretty simple–all it does is invoke a state-change function corresponding to the specific event, changing state as a side-effect:
(define (handle-events state evt)
  (match evt
    (($ <event> 'move payload)
     (atomic-box-update! state make-move payload))
    (($ <event> 'travel payload)
     (atomic-box-update! state time-travel payload))
    (($ <event> 'start payload) #f)
    (_ #f))) ; ignore anything else
Clojure has a function for updating atoms with the same signature. Here is a link to a version of atomic-box-update! proposed by Andrew Tropin on the Guile discussion list for inclusion in the atomic-box Guile module. The key function here is atomic-box-update!, our extension to the standard (ice-9 atomic) module, which takes as inputs the current state (an atomic box), a state-change function, and rest arguments:
(define (atomic-box-update! box proc . args)
  (let loop ((old-val (atomic-box-ref box)))
    (let* ((new-val (apply proc old-val args))
           (cas-val (atomic-box-compare-and-swap! box
                                                  old-val
                                                  new-val)))
      (if (eq? old-val cas-val)
          new-val
          (loop cas-val)))))
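As a usage sketch (a hypothetical example of my own, not part of the game code, assuming the atomic-box-update! definition above), atomically updating a counter held in a box looks like this:

```scheme
;; Hypothetical usage of atomic-box-update! (not from the game code):
;; atomically update a counter held in an atomic box.
(use-modules (ice-9 atomic))

(define counter (make-atomic-box 0))
(atomic-box-update! counter + 1)  ; box now holds 1
(atomic-box-update! counter * 2)  ; box now holds 2
```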
atomic-box-update! takes the value of state held in box, an atomic box, applies proc and args to it, and puts the result back in the atomic box, changing state in place. The use of an atomic box is not strictly necessary for our example, because WebAssembly (and therefore Hoot) does not currently support multithreading. Nevertheless, we show how to use atomic boxes because the same pattern could be used server-side, where Guile (i.e., Fibers) supports true multithreading. It also makes our code future-proof should WebAssembly gain multicore execution at some point.
The two functions used above for state change are make-move and time-travel. Let's examine them:
Changing State
State for our application consists of a record-type with three slots: history, a vector with 10 entries for boards (the starting empty board plus one board for each move); move, an index into history pointing at the current board; and total-boards, the number of played boards in our history, including the initial empty board. Note that with time-travel, move can hold any value from 0 to total-boards - 1.
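The post does not show the state record definition either; a minimal SRFI-9 sketch consistent with the constructor and accessors used in the code might be:

```scheme
;; Sketch only: the actual state record definition is not shown in the
;; post; the constructor and accessor names match those used in the code,
;; but the field layout is an assumption.
(use-modules (srfi srfi-9))

(define-record-type <state>
  (make-state history move total-boards)
  state?
  (history get-history)             ; vector of up to 10 boards
  (move get-move)                   ; index of the current board
  (total-boards get-total-boards))  ; boards so far, incl. empty board

(define (get-current-board state)
  (vector-ref (get-history state) (get-move state)))

(define (make-initial-state)
  (let ((history (make-vector 10 #f)))
    (vector-set! history 0 (make-vector 9 #f)) ; blank 3x3 board
    (make-state history 0 1)))
```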
State is changed by applying either of two functions to the current state and the event payload, the first being make-move:
(define (make-move state i)
  (let* ((history (get-history state))
         (move (get-move state))
         (board (vector-copy (vector-ref history move))))
    (if (or (win-game board)       ;; the game is over
            (vector-ref board i))  ;; there's a token in square
        state
        (let* ((new-history (make-vector 10 #f))
               (next-move (+ move 1))
               ;; account for initial board in total-boards
               (total-boards (+ next-move 1)))
          ;; clear entries in history past current move
          (vector-copy! new-history 0 history 0 next-move)
          ;; make the move in copy of current board
          (vector-set! board i (token state))
          ;; add new board to history
          (vector-set! new-history next-move board)
          (make-state new-history next-move total-boards)))))
The lack of an efficient persistent vector data structure makes this code a bit clunky: we either rely on mutation, or we have to copy the entire vector. make-move first checks whether the proposed move is impossible, because either there is already a winner or the square is already occupied. If so, it returns the current value of state unchanged.
If it is possible to make a move onto square i, it computes the components for updating state: it copies the current history and adds a new board corresponding to the new move, increments the values of both move and total-boards, and then returns the updated value of state.
We change state to reflect time-travel clicks with time-travel:
(define (time-travel state move)
  (let ((history (vector-copy (get-history state)))
        (total-boards (get-total-boards state)))
    (make-state history move total-boards)))
time-travel is quite simple: it takes the current state and returns a new value with the old history and total-boards, together with the (new) value passed in move.
Back to Templating
We mentioned above how Guile's quasi-quoting can be used to create a kind of dynamic HTML. Below is the entire template for the web page–it should look familiar to anyone who has used React or Vue, or even plain HTML:
(define (tic-tac-toe enq-ch deq-ch stop?)
  (define header
    `(h1 "Tic-Tac-Toe"))

  (define (status state)
    (let* ((board (get-current-board state))
           (winner (win-game board))
           ;; tie if every square has a token and winner is #f
           (tie (every identity (vector->list board)))
           (msg (cond
                 (winner (string-append "Winner: " winner))
                 (tie "Game tied!")
                 (else (string-append "Next Player: " (token state))))))
      `(div (@ (class "status")) ,msg)))

  (define (display-square state i)
    (let ((square (vector-ref (get-current-board state) i)))
      (or square "")))

  (define (square state i)
    `(div (@ (class "grid-margin")
             (click ,(lambda (event)
                       (put-message enq-ch (make-event 'move i)))))
          ,(display-square state i)))

  (define (board state)
    `(div (@ (class "table-width"))
          ,@(map (lambda (i)
                   (square state i))
                 (iota 9))))

  (define (display-travel-square i)
    (if (zero? i)
        "Game Start"
        (string-append "Move: " (number->string i))))

  (define (travel-button i)
    `(div (@ (class "travel-button")
             (click ,(lambda (event)
                       (put-message enq-ch (make-event 'travel i)))))
          ,(display-travel-square i)))

  (define (time-travel-moves state)
    (let ((history (get-history state)))
      ;; how to do css styles on elements
      `(div (@ (style "padding-top:0.10rem;"))
            ,@(map travel-button (iota (get-total-boards state))))))

  (define (container state)
    `(div (@ (class "ttt-container"))
          ,(board state)
          ,(time-travel-moves state)))

  (define (app state)
    `(div
      ,header
      ,(status state)
      ,(container state)))

  app)
One more time, reading from the bottom, tic-tac-toe returns app, a function that takes state as its input and builds an HTML-like version of the web application. As we can see, it has header and status components, followed by a container, which holds the board and the time-travel component.
How do we insert this into the appropriate DOM element to render the page? This is the job of the rendering code written by Dave Thompson (with a minor modification so that it takes the application of app to the global state as its parameter), found in the vdom.scm file in the repository. We refer the reader to his blog article.
Testing
One of the advantages of encoding a physical event like a mouse click into an event message is that we can test the state-changing components of the code base using synthetic events. The Guile file test.scm shows how this can be accomplished; we can run the tests at the terminal with guile -L . test.scm.
We first define a couple of helper functions to convert data into events and then process them:
(define (event-seq evt-lst)
  (map (lambda (event-msg)
         (let ((evt (car event-msg))
               (payload (cadr event-msg)))
           (make-event evt payload)))
       evt-lst))

(define (process-events state evt-lst)
  (for-each (lambda (evt)
              (handle-events state evt))
            evt-lst))
Our events are simple lists of the form (type payload). The first function takes a list of synthetic events and translates them into event record-types; the second one processes the newly created events, potentially modifying state with each. As mentioned above, processing events changes state as a side-effect; in fact, the sole purpose of make-move and time-travel is to compute the new value of state.
We are now ready to write some tests:
(let ((state (make-atomic-box (make-initial-state)))
      (move-events '((move 0) (move 4) (move 3) (move 7) (move 6))))
  ;; handle a sequence of move events
  (process-events state (event-seq move-events))
  ;; the current board should reflect the events
  (test-equal (vector X #f #f X O #f X O #f)
    (get-current-board (atomic-box-ref state)))
  ;; the move slot should be 5
  (test-equal 5 (get-move (atomic-box-ref state)))
  ;; the total number of boards should be 6
  (test-equal 6 (get-total-boards (atomic-box-ref state)))
  ;; process a travel event
  (process-events state (event-seq '((travel 4))))
  ;; the current board should reflect the travel event
  (test-equal (vector X #f #f X O #f #f O #f)
    (get-current-board (atomic-box-ref state)))
  ;; the move slot should be 4
  (test-equal 4 (get-move (atomic-box-ref state)))
  ;; the total number of boards should be 5
  (test-equal 5 (get-total-boards (atomic-box-ref state))))
The let bindings at the top allow us to define a sequence of synthetic events conveniently as a list of lists, each with an event type and a payload. Starting from an initial state where the board is blank, we process the events to get to an intermediate state of play.
Our first test checks that the current board reflects the sequence of events. We also check that the move slot corresponds to the 5 moves made and that the total number of boards is 6. Next, we process a time-travel event and check that the value of the current board matches our expectations, that the move slot has a value of 4, and that the total number of boards is 5. No need to rely on the browser for checking the state-change logic.
Using Fibers
I wrote this section because I was struggling with how to use Guile Fibers. While the documentation is truly excellent, I didn't have the background needed to understand some of the key ideas (channels, operations) and I couldn't find any tutorials online. The first section of the Fibers documentation provides a concise and exceedingly clear history of concurrent programming that motivates the approach; it warrants multiple readings. The asynchronous queue we build is described and implemented in Concurrent ML in John Reppy's Concurrent ML: Design, Application, and Semantics, and I recommend that the reader tackle Reppy's paper first, as it is an excellent introduction to the topic of concurrent programming.
This is my attempt to clarify my grasp of the concurrent programming model in Fibers through a detailed examination of a concrete example. The first sections present the building blocks of Fibers, which are then used in the latter sections to implement, piece by piece, an asynchronous queue. My hope is that readers will feel comfortable with the Fibers documentation after reading this primer.
Fibers
Concurrency in Guile is provided via fibers, which are lightweight threads. Conceptually, we can think of fibers as closures that are managed by a scheduler. These fibers can execute asynchronously, independently of one another.
In the case of regular Guile, fibers can use as many cores as specified. Guile Hoot does not support multithreading in the browser, so the concurrency abstraction is implemented with a task-processing loop. There is no parallelism gain from using Fibers in Hoot. Nevertheless, we can still use the library effectively to implement event-driven programs running in the browser.
A new fiber is created with (spawn-fiber thunk), which executes the thunk passed as its sole argument in its own computational context. But we need to be executing in a context for which there is a scheduler (in the case of regular Guile), or inside a promise, for Hoot.
For Guile, the reader should look up run-fibers in the API documentation. Hoot requires that the code be run inside a promise, i.e., a function with two parameters, resolved and rejected. The Hoot documentation provides the example below:
(use-modules (fibers promises)
             (fibers timers))

(lambda (resolved rejected)
  (call-with-async-result
   resolved rejected
   (lambda ()
     (display "Waiting... ")
     (force-output)
     (sleep 1)
     (display "done!\n")
     (force-output)
     42)))
Recall that we use this very same structure in our application, and the queue we use executes asynchronously precisely because it runs inside such a context.
Channels
Concurrent programming demands communication and synchronization between threads, which in the case of Fibers is achieved through channels. This abstraction was introduced by Tony Hoare in his Communicating Sequential Processes paper. Channels are blocking affordances for communication between threads (for us, fibers): a channel will block a sender until there is a receiver and, conversely, will block a receiver until a message is sent.
John Reppy puts it well: “When a thread wants to communicate on a channel, it must ‘rendezvous’ with another thread that wants to do complementary communication on the same channel”. This is a really nice way to say that a channel is not a queue holding messages: channels never own or hold messages. Instead, they provide a mechanism by which a sending thread can coordinate to give a message to another thread that has signaled its willingness to receive it.
In terms of the API, we create a channel with make-channel. As mentioned above, we can send and receive data using channels, as we will see in the next section.
Operations
We need one more concept: a way to reason about asynchronous events, for which Fibers introduces the concept of an operation. An operation can be performed once, many times, or not at all. Put another way, a Fibers operation is a way to express a computing possibility. Operations are first-class values, so starting with the basic operations provided by Fibers, we can combine and extend them to create new operations that model complex behaviour.
To make all of this more concrete, let's consider the basic operations on channels provided natively by Fibers: (get-operation ch) expresses a willingness to receive a message from channel ch. Conversely, (put-operation ch msg) captures the desire to put the message msg on channel ch.
By themselves, these operations do nothing and coordinate nothing. We need to invoke the Fibers function (perform-operation op) for something to happen; if something does happen, we say that the operation has succeeded. Note that (perform-operation op) returns the values produced by performing op and blocks until it can complete.
Thus, if we actually want to wait on channel ch to receive data, we need to apply perform-operation to (get-operation ch). Because this is a common activity, Fibers provides (get-message ch), which is equivalent to (perform-operation (get-operation ch)). Similarly, (put-message ch msg) is shorthand for (perform-operation (put-operation ch msg)). Two fibers can interact and exchange information on a channel by having one of them invoke put-message while the other executes a corresponding get-message. As mentioned before, one of the threads will block until the other makes it possible for synchronization to occur.
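As a tiny illustration (a sketch in regular Guile under run-fibers; in Hoot the same code would run inside the promise boilerplate shown earlier), two fibers can rendezvous like this:

```scheme
;; Sketch: two fibers rendezvous on a channel (regular Guile with
;; run-fibers from the guile-fibers library).
(use-modules (fibers)
             (fibers channels))

(run-fibers
 (lambda ()
   (let ((ch (make-channel)))
     ;; the sender blocks until a receiver is ready
     (spawn-fiber
      (lambda ()
        (put-message ch "hello")))
     ;; the receiver blocks until the sender offers a message
     (display (get-message ch))
     (newline))))
```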
Asynchronous Queue
Back to our queue: we want to express our willingness to either put data on the queue if it is empty or, if the queue holds some event messages already, to carry out one of two actions: put one more event message on the queue, or to take the message off the head of the queue for processing the event (in the event loop). We are modelling all possibilities for operations on a queue, regardless of whether it is empty or not.
Let's say that we can encode this complex desire as a brand-new operation that we will call queue-op. We will then invoke perform-operation on queue-op so that, nondeterministically, one of the valid actions on the queue can take place through the success of one of its constitutive operations. We make no claims about which action takes place, nor do we care about the order–this is the power of the abstraction: it lets us reason about asynchronous events without any reference to timing or sequence.
In the web application example mentioned above, we rely on a put-message for the start-event. Were this put-message not executed, the event loop would block and nothing would happen. We might be tempted to think we could use get-message and put-message operating on a channel to encode the logic described above. However, these functions are blocking, and we can foresee a scenario where get-message gets ahead of put-message and we end up deadlocked. More importantly, recall that channels are not queues.
So how do we build an asynchronous queue for our events? The answer is that we need to rely on two channels and a simple data queue. These pieces come together by using Fibers operations to express the complex operation queue-op described above.
Modelling the Queue
A first stab:
(if (q-empty? q)
    ;; then clause -- the queue is empty
    (put a message from a receiving channel on the queue)
    ;; else clause -- the queue has stuff in it
    (or (put a message from a receiving channel on the queue)
        (take message from head of q and put it on broadcasting channel)))
We know how to express the desire to get a message from channel ch: (get-operation ch). But here we want to take a message msg from channel ch and put it on the queue.
(wrap-operation op fn) is a primitive provided by Fibers that allows us to create a complex operation that takes the value returned by the succeeding operation op and applies fn to it. We can express the idea of taking a message msg from channel enq-ch and then enqueueing it on back-queue with
(define (enq-op)
  (wrap-operation (get-operation enq-ch)
                  (lambda (msg)
                    (enq! back-queue msg))))
Next, we need to encode the idea of taking the value at the head of the queue and putting it on the channel deq-ch:
(define (deq-op)
  (put-operation deq-ch (deq! back-queue)))
Easy peasy!
Modelling the Queue II
Back to our first stab: expressing the intent of the then clause is straightforward–we just use enq-op. But how do we express the idea of the else clause? For this we use the primitive function (choice-operation op1 ...), which combines the operations provided in such a way that only one of them succeeds, nondeterministically. Accordingly, the idea that a non-empty queue can either dequeue or enqueue values is expressed by:
(choice-operation (enq-op) (deq-op))
Now we can take this complex operation and use perform-operation to act on our queue:
(perform-operation
 (if (q-empty? back-queue)
     (enq-op)
     (choice-operation (enq-op) (deq-op))))
The argument provided to perform-operation is the queue-op operation we had hoped for!
It's worth revisiting our original idea of queue-op:
(if (q-empty? back-queue)
    (enq-op)
    (choice-operation (enq-op) (deq-op)))
models the operations on a queue, i.e., it abstracts the notion of a queue in terms of the possible operations that can take place on it, without making any assumptions about their sequence. It just says: “An empty queue can receive a value; a non-empty queue can receive a value or yield one”. This operation expresses our willingness to execute, nondeterministically, some valid operation on a queue.
Modelling the Queue III
The last piece: our queue needs to be ready to operate on values either as they become available (enqueue) or needed (dequeue) over time:
(let forever ()
  (perform-operation
   (if (q-empty? back-queue)
       (enq-op)
       (choice-operation (enq-op) (deq-op))))
  (forever))
Our complex operation will block if there is no corresponding operation to any of its basic constituent operations. Recall that channel communication takes place only when we have reciprocal operations. Our infinite loop waits to enqueue a value if the queue is empty or, if the queue has at least one value, either to enqueue or dequeue a value. But we should provide a clean exit from the loop so that resources can be reclaimed when we are done with our application.
We can express the intention to exit the loop via a stop-op operation. We use the built-in primitive operation (wait-operation cvar), which waits until the Fibers condition variable cvar is signalled with the function (signal-condition! cvar). We use wrap-operation again to set a boolean variable that makes it possible to exit the loop whenever wait-operation succeeds:
(define (stop-op)
  (wrap-operation (wait-operation stop?)
                  (lambda ()
                    (set! keep-going? #f))))
But now we have to modify our complex operation queue-op, since we need to entertain the possibility that stop? has been signalled under either scenario, so we include stop-op in both branches. This gives rise to
(let forever ()
  (if keep-going?
      (begin
        (perform-operation
         (if (q-empty? back-queue)
             (choice-operation (enq-op) (stop-op))
             (choice-operation (deq-op) (enq-op) (stop-op))))
        (forever))
      (display "Message Loop finished!\n")))
We now have a way to model activity on a queue with a termination condition that can be triggered by any function that has access to the Fibers condition variable stop?.
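For instance, a hypothetical shutdown handler needs nothing more than the condition variable (the handler name here is made up for illustration):

```scheme
;; Hypothetical handler wired to a "Quit" button: signalling stop?
;; makes stop-op succeed on the loop's next iteration.
(define (on-quit-clicked event)
  (signal-condition! stop?))
```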
This is a good place to revisit the two infinite loops we have seen so far: the event loop listens on deq-ch, for which deq-op provides the complementary sending operation. In other words, when a value is dequeued, it is then processed by our event loop.
The other loop, the queue message loop, is always listening for messages to be enqueued via the enq-ch channel. But this is the same channel that browser events will use to put messages on the queue with something like (put-message enq-ch evt-data), where evt-data is the encoding of the browser event, which might include the event type as well as any event data needed for processing.
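As a sketch, a browser click handler for a tic-tac-toe cell might enqueue its event like this (the handler name and event encoding are made up for illustration):

```scheme
;; Hypothetical click handler: encode the event as a tagged list and
;; hand it to the queue. Enqueueing never blocks, because the queue
;; message loop is always willing to receive on enq-ch.
(define (on-cell-click cell-index)
  (lambda (event)
    (put-message enq-ch (list 'cell-clicked cell-index))))
```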
Whenever we use channels, it is important to understand how the corresponding send and receive operations interact.
The Queue: The Whole Story
Now we are ready to look at all the code for the implementation of the asynchronous queue:
;; Module names as in mainline Guile and Fibers; adjust for your Hoot version.
(use-modules (fibers)            ; spawn-fiber
             (fibers channels)   ; channels, get-operation, put-operation
             (fibers conditions) ; condition variables, wait-operation
             (fibers operations) ; wrap-operation, choice-operation, ...
             (ice-9 q))          ; mutable queues

(define (inbox)
  (define enq-ch (make-channel)) ; inbound messages
  (define deq-ch (make-channel)) ; outbound messages
  (define stop? (make-condition))
  (define back-queue (make-q))
  (define (start-inbox-loop)
    (define keep-going? #t)
    ;; incoming
    (define (enq-op)
      (wrap-operation (get-operation enq-ch)
                      (lambda (msg)
                        (enq! back-queue msg))))
    ;; outgoing: offer the front of the queue, but only remove it once
    ;; the put actually succeeds, so no message is lost when another
    ;; operation wins the choice
    (define (deq-op)
      (wrap-operation (put-operation deq-ch (q-front back-queue))
                      (lambda ()
                        (deq! back-queue))))
    ;; halt the message loop
    (define (stop-op)
      (wrap-operation (wait-operation stop?)
                      (lambda () (set! keep-going? #f))))
    ;; message loop
    (lambda ()
      (let forever ()
        (if keep-going?
            (begin
              (perform-operation
               (if (q-empty? back-queue)
                   (choice-operation (enq-op) (stop-op))
                   (choice-operation (deq-op) (enq-op) (stop-op))))
              (forever))
            (display "Message Loop finished!\n")))))
  ;; boot it up!
  (spawn-fiber (start-inbox-loop))
  ;; return the channels and condition variable for use by the browser event loop
  (values enq-ch deq-ch stop?))
There are still a few bits we haven’t discussed.
First, the internal function start-inbox-loop defines the operations that abstract a queue and returns a thunk containing the loop that uses them: the queue message loop. Next, in the body of inbox, this thunk is run as a fiber by spawn-fiber. This means it executes in its own context on a task loop, which is one reason why it is desirable to have stop-op.
Finally, inbox returns the channels enq-ch and deq-ch as well as the condition variable stop?. These values can be used by the event loop and the browser event handlers to dequeue and enqueue event data, as we have seen.
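Putting the pieces together, the wiring might look like the following sketch, where handle-event! is a placeholder for application logic and (ice-9 match) is assumed to be available:

```scheme
(use-modules (ice-9 match))

(define-values (enq-ch deq-ch stop?) (inbox))

;; Event loop: dequeue and dispatch events until a 'quit message
;; arrives, then signal stop? so the queue message loop shuts down too.
(spawn-fiber
 (lambda ()
   (let loop ()
     (match (get-message deq-ch)
       ('quit (signal-condition! stop?))
       (evt (handle-event! evt) (loop))))))
```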
The code repo has working code using all the bits discussed in this piece.
Further Reading
For Concurrent ML, all paths lead to John Reppy, who has a number of publications. I recommend Concurrent ML: Design, Application, and Semantics, a brief and clear exposition of the main ideas. He also has a book on the topic but I haven’t been able to access it. I also found Synchronizable abstractions for understandable concurrency by Adam Solove really helpful for confirming my understanding of Concurrent ML. Highly recommended.
Andy Wingo, the implementer of Fibers, has written several blog posts on the topic: Concurrent ML vs Go, where he ponders what approach he should follow for implementing lightweight threads in Guile; Is Go an acceptable cml, which concludes that he must implement CML from scratch; An incomplete history of language facilities for concurrency, where Wingo reviews the work to date with great clarity and succinctness; Growing Fibers, a technical blog on the implementation of fibers; and A new concurrent ml, where Wingo discusses the high points of implementing the main features of Concurrent ML in Guile.
Racket includes a model of concurrency similar to Fibers, but it offers a more extensive API that matches Concurrent ML more closely. Like Guile Hoot, Racket runs all its threads on one processor. Once again, I haven't yet found a good tutorial for its use, although the documentation is first-rate. This would be a good platform to explore some of the facilities missing from Fibers.